Search Results


  • Website and file/directory permissions

    - by mathiass
    I've been given a task to fix a website. One of its issues is that on one page the images have broken links - the images are not showing, and clicking on an image (i.e. a direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what the possible cause could be.

    The directory where the images are stored has the following permissions (I had to hide the names):

        drwxrws---  www  "group"  10240  Aug 2008  "image directory name"

    I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that there have recently been some changes to the server. I am assuming that there is no fault in the source code and that the permissions are - or used to be - correct, since the site worked before and no recent changes have been made to the site itself. My only thoughts at the moment are that either:

    a) the directory permissions should be drwxrws--x, i.e. execute (search) permission for other users, or
    b) there is a change in the server settings that I don't know of.

    Is there anything else I should check?
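
    A quick way to narrow down cause (a) versus (b) is to check which account the web server actually runs as and whether that account can traverse the directory. A diagnostic sketch, assuming an Apache-style server on Linux; the path and the www-data user are placeholders:

        # Identify the account the web server workers run under
        ps aux | grep -E '[h]ttpd|[a]pache'

        # Confirm the directory's owner, group and mode
        ls -ld /path/to/image-directory

        # Try to list the directory as the server account; failing here
        # reproduces the 403 and points at permissions, not the site code
        sudo -u www-data ls /path/to/image-directory

        # Option (a) from the question: grant execute (traversal) to others
        chmod o+x /path/to/image-directory

    With the current mode drwxrws---, the server can only enter the directory if it runs as user www or as a member of the hidden group, so a server-side change of that account would produce exactly this 403 without any change to the site.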


  • Variable host IP address in iptables rule

    - by DrakeES
    I am running CentOS 6.4 with OpenVZ on my laptop. In order to provide Internet access for the VEs I have to apply the following rule on the laptop:

        iptables -t nat -A POSTROUTING -j SNAT --to-source <LAPTOP_IP>

    It works fine. However, I have to work in different places - office, home, partner's office etc. The IP of my laptop is different in each of those places, so I have to alter the rule above every time I change place. I have created a workaround which basically determines the IP and applies the rule:

        #!/bin/bash
        IP=$(ifconfig | awk -F':' '/inet addr/&&!/127.0.0.1/{split($2,_," ");print _[1]}')
        iptables -t nat -A POSTROUTING -j SNAT --to-source $IP

    The workaround above works; I only still have to execute it manually. Perhaps I could make it a hook that executes whenever my laptop obtains an IP address from DHCP - how can I do that? Also, I am wondering if there is a more elegant way of getting this done in iptables in the first place. Maybe there is a syntax that allows specifying "the current hardware IP address" in the rule?
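
    Two hedged suggestions, not tested on this exact setup. First, iptables has a target made for exactly this case: MASQUERADE behaves like SNAT but always uses the outgoing interface's current address, so the rule survives address changes:

        # eth0 is an assumption - use whichever interface has the dynamic address
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    Second, if SNAT is preferred, dhclient runs an exit-hook script after every lease event, with $reason and $new_ip_address already set by dhclient-script. A minimal sketch (the hook path varies by distro; on CentOS it is typically /etc/dhclient-exit-hooks, and the file is sourced, so no shebang is needed):

        # /etc/dhclient-exit-hooks - re-point the SNAT rule at the new lease address
        case "$reason" in
            BOUND|RENEW|REBIND)
                # Flushing the whole chain is fine here only because this
                # SNAT rule is the single rule in it, as in the question
                iptables -t nat -F POSTROUTING
                iptables -t nat -A POSTROUTING -j SNAT --to-source "$new_ip_address"
                ;;
        esac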


  • TCP Port Open by Unknown Service

    - by Singularity
    Running openSUSE 11.2 x86_64. Here's what an nmap of my IP reports:

        PORT     STATE SERVICE
        23/tcp   open  telnet
        80/tcp   open  http
        2800/tcp open  unknown
        8008/tcp open  http

    I would like to know how to find out which service is causing port 2800 to be open. A few search engine results led me to believe that it is supposedly a port opened by a Trojan called "Theef". If it is indeed a Trojan, what can be done to weed it out? Is my desktop's security compromised?
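
    On the machine itself (not over the network), the listening socket can be mapped straight to a process. A short sketch of the usual commands, run as root:

        # -p adds the PID/program name column to the listener list
        netstat -tlnp | grep ':2800'

        # lsof gives the same answer from the file-descriptor side
        lsof -i TCP:2800

        # fuser maps the port to a PID as well
        fuser -v 2800/tcp

    Knowing the binary behind the PID (e.g. via ls -l /proc/<PID>/exe) is far more reliable than guessing from port-number lists, since any program can bind 2800.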


  • Windows 7 just deleted 4 days of work

    - by Mat
    Hey! I'm just about to freak out. I just finished a project and rebooted my computer. It didn't want to boot anymore, so I had to use the Windows 7 system repair option. It ran for a minute and then booted up. Now most of my source code from the last 4 days of work is gone!

    Background: sometimes (most often after installing new software) my notebook won't boot up anymore. It will just show the little Windows 7 flag, but not read from the hard disk anymore. If I hard-abort and reboot, it then asks me whether to start Windows normally (which won't work) or to run "Windows startup repair". If I run it, it does some stuff for about two or three minutes and then I can boot Windows again. Usually after this, .exe files I added to the computer during the previous days are gone - but so far other files were not touched. But now, after this happened, a whole bunch of ".as" (ActionScript source) files from my project are gone! Does anyone know where and whether there's a way to recover them?


  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc) export BIN=${ABC_BIN} ;;
            def) export BIN=${DEF_BIN} ;;
            *)   export BIN=${BASE_BIN} ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Now these VARs are export'ed only in a subshell, but I want them to be set in my parent shell as well, so that those vars are still set correctly when I am back at the prompt. I know about ". .myscript.sh", but is there a way to do it without 'sourcing'? My users often forget to 'source'.

    EDIT1: removed the "exit 0" part - this was just me typing without thinking first.

    EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) 2 apps: ABC and DEF. Each app is run in production by a separate user, usrabc and usrdef, which each have their $BIN, $CFG, $ORA_HOME, whatever, set up specific to their app. So:

        ABC's $BIN = /opt/abc/bin   # $ABC_BIN in the above script
        DEF's $BIN = /opt/def/bin   # $DEF_BIN
        etc.

    Now, on the dev box developers can develop both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file (above) so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc. This makes me do it interactively, asking for a sandbox name and so on:

        /home/justin_case/sandbox_abc_beta2
        /home/justin_case/sandbox_abc_r1
        /home/justin_case/sandbox_def_r1

    The other option I have considered is writing an alias and adding it to every user's profile:

        alias 'setup_env=. .myscript.sh'

    and running it with:

        setup_env parameter1 ... parameterX

    This makes more sense to me now.
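
    For the core question: a child process can never modify its parent shell's environment, so some form of sourcing is unavoidable. The usual trick is to hide the sourcing behind an alias or function so users cannot forget it - a small sketch for ksh, using the names from the question:

        # In each user's ~/.profile
        alias setup_env='. .myscript.sh'

        # Or as a function; ksh93's dot command forwards arguments,
        # so "setup_env abc" becomes ". .myscript.sh abc"
        setup_env() {
            . .myscript.sh "$@"
        }

    Either way the script body runs in the user's current shell, so the exports stick after it finishes.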


  • How does the internet protocol handle network card numbers?

    - by Giorgio
    I know that data packets sent over the internet carry the source and destination IP address, so that the protocol can route the data to the correct destination and keep track of the source address of the packet. But what about the network card address? As far as I know, each network card has a unique identification number. Is this also transmitted with a TCP/IP packet? And when a packet is received at its destination, how is the IP address mapped to a network card number? In other words:

    On the sender's side: does the sender store its network card number in the IP packets it sends?

    On the receiver's side: which component maps the IP address to the receiver's network card number when a packet is received? E.g., in a home network, does the modem/router map the destination IP address of an incoming packet to a network card number and deliver the packet directly to that network card?

    A link to documentation on these topics would be of great help.
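
    One concrete thing to look at while reading up on this: on the local network segment the IP-to-hardware-address mapping is done by ARP, and a Linux machine's current ARP table can be inspected directly. A small illustration (real commands; the output naturally differs per machine):

        # Modern iproute2 interface
        ip neigh show

        # Traditional equivalent
        arp -n

    Each entry pairs an IP address on the local link with the MAC address of the network card that answers for it - which is the mapping the second question asks about for the final hop.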


  • [openVPN] Server & client on same machine, and multiple VPN servers

    - by HiWorld
    Hello everyone, I'm stuck configuring OpenVPN to build a chained VPN connection, like this:

        CLIENT - VPN1 - VPN2 - INTERNET

    I already have a normal single VPN working and know how to set one up, but I want to use a chain of VPNs, so let me explain what I have done and how I did it.

    On VPN1 I have one OpenVPN instance running as a server (where CLIENT connects to) and another running as a client connecting to VPN2, which runs as a server. Here comes the problem: when I connect VPN1 as a client of VPN2, I can't connect to VPN1 from CLIENT anymore. My question is how to proceed with this. I also have a third instance working as a server, to use VPN1 without chaining.

    On VPN2 there is one OpenVPN instance running as a server, where VPN1 connects and is then forwarded to the net.

    I'm using TUN interfaces in the configs, and iptables is set up this way:

        VPN1 - openvpn ip server1: 192.168.6.0 / ip as client of VPN2: 192.168.5.70
        iptables -t nat -A POSTROUTING -s 192.168.6.0 -j SNAT --to-source 192.168.5.70

        VPN2 - openvpn ip server2: 192.168.5.0
        iptables -t nat -A POSTROUTING -s 192.168.5.0/24 -j SNAT --to-source EXTERNAL_IP_TO_INTERNET

    Hope someone can help me with this. Thanks in advance.
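
    One generic cause worth ruling out - hedged, since the full configs aren't shown: if VPN1's client config pulls redirect-gateway from VPN2, VPN1's own default route moves into the tunnel, so VPN1's replies to CLIENT get sent towards VPN2 instead of out the real uplink, which looks exactly like "can't reach VPN1 once it connects to VPN2". A quick check-list sketch:

        # Both VPN boxes must forward between interfaces
        sysctl net.ipv4.ip_forward      # should print 1

        # On VPN1, inspect routing while the client tunnel is up;
        # CLIENT's address must still be reachable via the real uplink
        ip route

    Also note that the first SNAT rule uses "-s 192.168.6.0" with no mask, which iptables treats as the single host 192.168.6.0/32; "-s 192.168.6.0/24" is probably what was intended.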


  • mod_rewrite all but two files causing loop

    - by mpounsett
    I'm trying to set up a web site to allow the creation of a semaphore file to close the site. The logic I want to follow is:

    1. when the semaphore file exists
    2. and the request is not for /style.css or /favicon.ico
    3. show the content of /closed.html

    I have 1 and 3 working, but my exceptions for 2 result in a processing loop when style.css or favicon.ico are requested. This is my most recent attempt:

        RewriteEngine on
        RewriteCond %{REQUEST_URI} !^/style.css
        RewriteCond %{REQUEST_URI} !^/favicon.ico
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    This is in a VirtualHost block, not in a Directory. There is no .htaccess file in play. I have also recently tried this, based on an answer I found elsewhere, but with the same (looping) result:

        RewriteCond %{REQUEST_URI} ^/style.css [OR]
        RewriteCond %{REQUEST_URI} ^/favicon.ico
        RewriteRule ^.*$ - [L]
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    I expect a request for /style.css or /favicon.ico to fail to match one of the first two rewrite conditions, which should prevent the URI from being rewritten, which should stop the mod_rewrite iteration. However, mod_rewrite seems to think the URI has been rewritten in those cases, and iterates over the rules again (and again, and again). The above works properly in all cases except for style.css or favicon.ico. In those cases I exceed the loop limits. What am I missing here to cause the rewrite iteration to stop when someone requests style.css or favicon.ico?

    EDIT: Here's a loglevel 9 example of what happens using the first ruleset when a request arrives for /style.css. This is just the first two iterations; it continues to loop identically until the limit is reached.

        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1db0a0/initial] (1) pass through /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (2) init rewrite engine with requested uri /style.css
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (3) applying pattern '^.*$' to uri '/style.css'
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (4) RewriteCond: input='/style.css' pattern='!^/style.css' => not-matched
        2001:4900:1044:0:145f:826e:6436:dc1 - - [29/May/2014:15:29:26 +0000] [host.example/sid#80c1c48b0][rid#80c1dd0a0/initial] (1) pass through /style.css
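
    For reference, a commonly suggested shape for this ruleset - a sketch, not a verified fix for this exact Apache build. Per-request conditions only guard the single RewriteRule directly below them, and the log above shows that even a pass-through is followed by a fresh internal pass (note the new rid), so the goal is to make every pass end without matching anything:

        RewriteEngine on
        # Exempt the two static files and the target page itself,
        # so the rewritten request cannot match again on the next pass
        RewriteCond %{REQUEST_URI} !^/(style\.css|favicon\.ico|closed\.html)$
        RewriteCond /usr/local/etc/site/closed -f
        RewriteRule ^.*$ /closed.html [L]

    On Apache 2.4 the [END] flag is also worth trying in place of [L]: unlike [L], it stops any further rounds of rewrite processing for the request, which is the usual recommendation for loops of this kind.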


  • Connecting MySQL to MSSQL

    - by user180198
    I need a little advice. I need to sync a DB that is currently on a Server 2008 (SQL Server 2005) machine, which I connect to with Studio Express, to a MySQL database that lives on a NAS on the same network:

        Local:  DB engine on server, named server\sqlexpress, IP = 10.0.0.201
        Target: DB on NAS, named CISCO-NAS, IP = 10.0.0.182

    This will need to sync every few minutes... I really don't know how to start.


  • Where can these be posted besides the Python Cookbook?

    - by Noctis Skytower
    Whitespace Assembler #! /usr/bin/env python """Assembler.py Compiles a program from "Assembly" folder into "Program" folder. Can be executed directly by double-click or on the command line. Give name of *.WSA file without extension (example: stack_calc).""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 3 $' ################################################################################ import string from Interpreter import INS, MNEMONIC ################################################################################ def parse(code): program = [] process_virtual(program, code) process_control(program) return tuple(program) def process_virtual(program, code): for line, text in enumerate(code.split('\n')): if not text or text[0] == '#': continue if text.startswith('part '): parse_part(program, line, text[5:]) elif text.startswith(' '): parse_code(program, line, text[5:]) else: syntax_error(line) def syntax_error(line): raise SyntaxError('Line ' + str(line + 1)) ################################################################################ def process_control(program): parts = get_parts(program) names = dict(pair for pair in zip(parts, generate_index())) correct_control(program, names) def get_parts(program): parts = [] for ins in program: if isinstance(ins, tuple): ins, arg = ins if ins == INS.PART: if arg in parts: raise NameError('Part definition was found twice: ' + arg) parts.append(arg) return parts def generate_index(): index = 1 while True: yield index index *= -1 if index > 0: index += 1 def correct_control(program, names): for index, ins in enumerate(program): if isinstance(ins, tuple): ins, arg = ins if ins in HAS_LABEL: if arg not in names: raise NameError('Part definition was never found: ' + arg) program[index] = (ins, names[arg]) ################################################################################ def parse_part(program, line, text): if not valid_label(text): syntax_error(line) program.append((INS.PART, text)) def valid_label(text): if not between_quotes(text): return False label = text[1:-1] if not valid_name(label): return False return True def between_quotes(text): if len(text) < 3: return False if text.count('"') != 2: return False if text[0] != '"' or text[-1] != '"': return False return True def valid_name(label): valid_characters = string.ascii_letters + string.digits + '_' valid_set = frozenset(valid_characters) label_set = frozenset(label) if len(label_set - valid_set) != 0: return False return True ################################################################################ from Interpreter import HAS_LABEL, Program NO_ARGS = Program.NO_ARGS HAS_ARG = Program.HAS_ARG TWO_WAY = tuple(set(NO_ARGS) & set(HAS_ARG)) ################################################################################ def parse_code(program, line, text): for ins, word in enumerate(MNEMONIC): if text.startswith(word): check_code(program, line, text[len(word):], ins) break else: syntax_error(line) def check_code(program, line, text, ins): if ins in TWO_WAY: if text: number = parse_number(line, text) program.append((ins, number)) else: program.append(ins) elif ins in HAS_LABEL: text = parse_label(line, text) program.append((ins, text)) elif ins in HAS_ARG: number = parse_number(line, text) program.append((ins, number)) elif ins in NO_ARGS: if text: syntax_error(line) program.append(ins) else: syntax_error(line) def parse_label(line, 
text): if not text or text[0] != ' ': syntax_error(line) text = text[1:] if not valid_label(text): syntax_error(line) return text ################################################################################ def parse_number(line, text): if not valid_number(text): syntax_error(line) return int(text) def valid_number(text): if len(text) < 2: return False if text[0] != ' ': return False text = text[1:] if '+' in text and '-' in text: return False if '+' in text: if text.count('+') != 1: return False if text[0] != '+': return False text = text[1:] if not text: return False if '-' in text: if text.count('-') != 1: return False if text[0] != '-': return False text = text[1:] if not text: return False valid_set = frozenset(string.digits) value_set = frozenset(text) if len(value_set - valid_set) != 0: return False return True ################################################################################ ################################################################################ from Interpreter import partition_number VMC_2_TRI = { (INS.PUSH, True): (0, 0), (INS.COPY, False): (0, 2, 0), (INS.COPY, True): (0, 1, 0), (INS.SWAP, False): (0, 2, 1), (INS.AWAY, False): (0, 2, 2), (INS.AWAY, True): (0, 1, 2), (INS.ADD, False): (1, 0, 0, 0), (INS.SUB, False): (1, 0, 0, 1), (INS.MUL, False): (1, 0, 0, 2), (INS.DIV, False): (1, 0, 1, 0), (INS.MOD, False): (1, 0, 1, 1), (INS.SET, False): (1, 1, 0), (INS.GET, False): (1, 1, 1), (INS.PART, True): (2, 0, 0), (INS.CALL, True): (2, 0, 1), (INS.GOTO, True): (2, 0, 2), (INS.ZERO, True): (2, 1, 0), (INS.LESS, True): (2, 1, 1), (INS.BACK, False): (2, 1, 2), (INS.EXIT, False): (2, 2, 2), (INS.OCHR, False): (1, 2, 0, 0), (INS.OINT, False): (1, 2, 0, 1), (INS.ICHR, False): (1, 2, 1, 0), (INS.IINT, False): (1, 2, 1, 1) } ################################################################################ def to_trinary(program): trinary_code = [] for ins in program: if isinstance(ins, tuple): ins, arg = ins trinary_code.extend(VMC_2_TRI[(ins, True)]) trinary_code.extend(from_number(arg)) else: trinary_code.extend(VMC_2_TRI[(ins, False)]) return tuple(trinary_code) def from_number(arg): code = [int(arg < 0)] if arg: for bit in reversed(list(partition_number(abs(arg), 2))): code.append(bit) return code + [2] return code + [0, 2] to_ws = lambda trinary: ''.join(' \t\n'[index] for index in trinary) def compile_wsa(source): program = parse(source) trinary = to_trinary(program) ws_code = to_ws(trinary) return ws_code ################################################################################ ################################################################################ import os import sys import time import traceback def main(): name, source, command_line, error = get_source() if not error: start = time.clock() try: ws_code = compile_wsa(source) except: print('ERROR: File could not be compiled.\n') traceback.print_exc() error = True else: path = os.path.join('Programs', name + '.ws') try: open(path, 'w').write(ws_code) except IOError as err: print(err) error = True else: div, mod = divmod((time.clock() - start) * 1000, 1) args = int(div), '{:.3}'.format(mod)[1:] print('DONE: Comipled in {}{} ms'.format(*args)) handle_close(error, command_line) def get_source(): if len(sys.argv) > 1: command_line = True name = sys.argv[1] else: command_line = False try: name = input('Source File: ') except: return None, None, False, True print() path = os.path.join('Assembly', name + '.wsa') try: return name, open(path).read(), command_line, False except IOError as err: 
print(err) return None, None, command_line, True def handle_close(error, command_line): if error: usage = 'Usage: {} <assembly>'.format(os.path.basename(sys.argv[0])) print('\n{}\n{}'.format('-' * len(usage), usage)) if not command_line: time.sleep(10) ################################################################################ if __name__ == '__main__': main() Whitespace Helpers #! /usr/bin/env python """Helpers.py Includes a function to encode Python strings into my WSA format. Has a "PRINT_LINE" function that can be copied to a WSA program. Contains a "PRINT" function and documentation as an explanation.""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 1 $' ################################################################################ def encode_string(string, addr): print(' push', addr) print(' push', len(string)) print(' set') addr += 1 for offset, character in enumerate(string): print(' push', addr + offset) print(' push', ord(character)) print(' set') ################################################################################ # Prints a string with newline. # push addr # call "PRINT_LINE" """ part "PRINT_LINE" call "PRINT" push 10 ochr back """ ################################################################################ # def print(array): # if len(array) <= 0: # return # offset = 1 # while len(array) - offset >= 0: # ptr = array.ptr + offset # putch(array[ptr]) # offset += 1 """ part "PRINT" # Line 1-2 copy get less "__PRINT_RET_1" copy get zero "__PRINT_RET_1" # Line 3 push 1 # Line 4 part "__PRINT_LOOP" copy copy 2 get swap sub less "__PRINT_RET_2" # Line 5 copy 1 copy 1 add # Line 6 get ochr # Line 7 push 1 add goto "__PRINT_LOOP" part "__PRINT_RET_2" away part "__PRINT_RET_1" away back """ Whitespace Interpreter #! /usr/bin/env python """Interpreter.py Runs programs in "Programs" and creates *.WSO files when needed. Can be executed directly by double-click or on the command line. 
If run on command line, add "ASM" flag to dump program assembly.""" ################################################################################ __author__ = 'Stephen "Zero" Chappell <[email protected]>' __date__ = '14 March 2010' __version__ = '$Revision: 4 $' ################################################################################ def test_file(path): disassemble(parse(trinary(load(path))), True) ################################################################################ load = lambda ws: ''.join(c for r in open(ws) for c in r if c in ' \t\n') trinary = lambda ws: tuple(' \t\n'.index(c) for c in ws) ################################################################################ def enum(names): names = names.replace(',', ' ').split() space = dict((reversed(pair) for pair in enumerate(names)), __slots__=()) return type('enum', (object,), space)() INS = enum('''\ PUSH, COPY, SWAP, AWAY, \ ADD, SUB, MUL, DIV, MOD, \ SET, GET, \ PART, CALL, GOTO, ZERO, LESS, BACK, EXIT, \ OCHR, OINT, ICHR, IINT''') ################################################################################ def parse(code): ins = iter(code).__next__ program = [] while True: try: imp = ins() except StopIteration: return tuple(program) if imp == 0: # [Space] parse_stack(ins, program) elif imp == 1: # [Tab] imp = ins() if imp == 0: # [Tab][Space] parse_math(ins, program) elif imp == 1: # [Tab][Tab] parse_heap(ins, program) else: # [Tab][Line] parse_io(ins, program) else: # [Line] parse_flow(ins, program) def parse_number(ins): sign = ins() if sign == 2: raise StopIteration() buffer = '' code = ins() if code == 2: raise StopIteration() while code != 2: buffer += str(code) code = ins() if sign == 1: return int(buffer, 2) * -1 return int(buffer, 2) ################################################################################ def parse_stack(ins, program): code = ins() if code == 0: # [Space] number = parse_number(ins) program.append((INS.PUSH, number)) elif code == 1: # [Tab] code = ins() number = parse_number(ins) if code == 0: # [Tab][Space] program.append((INS.COPY, number)) elif code == 1: # [Tab][Tab] raise StopIteration() else: # [Tab][Line] program.append((INS.AWAY, number)) else: # [Line] code = ins() if code == 0: # [Line][Space] program.append(INS.COPY) elif code == 1: # [Line][Tab] program.append(INS.SWAP) else: # [Line][Line] program.append(INS.AWAY) def parse_math(ins, program): code = ins() if code == 0: # [Space] code = ins() if code == 0: # [Space][Space] program.append(INS.ADD) elif code == 1: # [Space][Tab] program.append(INS.SUB) else: # [Space][Line] program.append(INS.MUL) elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] program.append(INS.DIV) elif code == 1: # [Tab][Tab] program.append(INS.MOD) else: # [Tab][Line] raise StopIteration() else: # [Line] raise StopIteration() def parse_heap(ins, program): code = ins() if code == 0: # [Space] program.append(INS.SET) elif code == 1: # [Tab] program.append(INS.GET) else: # [Line] raise StopIteration() def parse_io(ins, program): code = ins() if code == 0: # [Space] code = ins() if code == 0: # [Space][Space] program.append(INS.OCHR) elif code == 1: # [Space][Tab] program.append(INS.OINT) else: # [Space][Line] raise StopIteration() elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] program.append(INS.ICHR) elif code == 1: # [Tab][Tab] program.append(INS.IINT) else: # [Tab][Line] raise StopIteration() else: # [Line] raise StopIteration() def parse_flow(ins, program): code = ins() if code == 0: # [Space] code = 
ins() label = parse_number(ins) if code == 0: # [Space][Space] program.append((INS.PART, label)) elif code == 1: # [Space][Tab] program.append((INS.CALL, label)) else: # [Space][Line] program.append((INS.GOTO, label)) elif code == 1: # [Tab] code = ins() if code == 0: # [Tab][Space] label = parse_number(ins) program.append((INS.ZERO, label)) elif code == 1: # [Tab][Tab] label = parse_number(ins) program.append((INS.LESS, label)) else: # [Tab][Line] program.append(INS.BACK) else: # [Line] code = ins() if code == 2: # [Line][Line] program.append(INS.EXIT) else: # [Line][Space] or [Line][Tab] raise StopIteration() ################################################################################ MNEMONIC = '\ push copy swap away add sub mul div mod set get part \ call goto zero less back exit ochr oint ichr iint'.split() HAS_ARG = [getattr(INS, name) for name in 'PUSH COPY AWAY PART CALL GOTO ZERO LESS'.split()] HAS_LABEL = [getattr(INS, name) for name in 'PART CALL GOTO ZERO LESS'.split()] def disassemble(program, names=False): if names: names = create_names(program) for ins in program: if isinstance(ins, tuple): ins, arg = ins assert ins in HAS_ARG has_arg = True else: assert INS.PUSH <= ins <= INS.IINT has_arg = False if ins == INS.PART: if names: print(MNEMONIC[ins], '"' + names[arg] + '"') else: print(MNEMONIC[ins], arg) elif has_arg and ins in HAS_ARG: if ins in HAS_LABEL and names: assert arg in names print(' ' + MNEMONIC[ins], '"' + names[arg] + '"') else: print(' ' + MNEMONIC[ins], arg) else: print(' ' + MNEMONIC[ins]) ################################################################################ def create_names(program): names = {} number = 1 for ins in program: if isinstance(ins, tuple) and ins[0] == INS.PART: label = ins[1] assert label not in names names[label] = number_to_name(number) number += 1 return names def number_to_name(number): name = '' for offset in reversed(list(partition_number(number, 27))): if offset: name += chr(ord('A') + offset - 1) else: name += '_' return name def partition_number(number, base): div, mod = divmod(number, base) yield mod while div: div, mod = divmod(div, base) yield mod ################################################################################ CODE = (' \t\n', ' \n ', ' \t \t\n', ' \n\t', ' \n\n', ' \t\n \t\n', '\t ', '\t \t', '\t \n', '\t \t ', '\t \t\t', '\t\t ', '\t\t\t', '\n \t\n', '\n \t \t\n', '\n \n \t\n', '\n\t \t\n', '\n\t\t \t\n', '\n\t\n', '\n\n\n', '\t\n ', '\t\n \t', '\t\n\t ', '\t\n\t\t') EXAMPLE = ''.join(CODE) ################################################################################ NOTES = '''\ STACK ===== push number copy copy number swap away away number MATH ==== add sub mul div mod HEAP ==== set get FLOW ==== part label call label goto label zero label less label back exit I/O === ochr oint ichr iint''' ################################################################################ ################################################################################ class Stack: def __init__(self): self.__data = [] # Stack Operators def push(self, number): self.__data.append(number) def copy(self, number=None): if number is None: self.__data.append(self.__data[-1]) else: size = len(self.__data) index = size - number - 1 assert 0 <= index < size self.__data.append(self.__data[index]) def swap(self): self.__data[-2], self.__data[-1] = self.__data[-1], self.__data[-2] def away(self, number=None): if number is None: self.__data.pop() else: size = len(self.__data) index = size - number - 1 assert 0 <= index < size del 
self.__data[index:-1] # Math Operators def add(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix + suffix) def sub(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix - suffix) def mul(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix * suffix) def div(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix // suffix) def mod(self): suffix = self.__data.pop() prefix = self.__data.pop() self.__data.append(prefix % suffix) # Program Operator def pop(self): return self.__data.pop() ################################################################################ class Heap: def __init__(self): self.__data = {} def set_(self, addr, item): if item: self.__data[addr] = item elif addr in self.__data: del self.__data[addr] def get_(self, addr): return self.__data.get(addr, 0) ################################################################################ import os import zlib import msvcrt import pickle import string class CleanExit(Exception): pass NOP = lambda arg: None DEBUG_WHITESPACE = False ################################################################################ class Program: NO_ARGS = INS.COPY, INS.SWAP, INS.AWAY, INS.ADD, \ INS.SUB, INS.MUL, INS.DIV, INS.MOD, \ INS.SET, INS.GET, INS.BACK, INS.EXIT, \ INS.OCHR, INS.OINT, INS.ICHR, INS.IINT HAS_ARG = INS.PUSH, INS.COPY, INS.AWAY, INS.PART, \ INS.CALL, INS.GOTO, INS.ZERO, INS.LESS def __init__(self, code): self.__data = code self.__validate() self.__build_jump() self.__check_jump() self.__setup_exec() def __setup_exec(self): self.__iptr = 0 self.__stck = stack = Stack() self.__heap = Heap() self.__cast = [] self.__meth = (stack.push, stack.copy, stack.swap, stack.away, stack.add, stack.sub, stack.mul, stack.div, stack.mod, self.__set, self.__get, NOP, self.__call, self.__goto, self.__zero, self.__less, self.__back, self.__exit, self.__ochr, self.__oint, self.__ichr, self.__iint) def step(self): ins = self.__data[self.__iptr] self.__iptr += 1 if isinstance(ins, tuple): self.__meth[ins[0]](ins[1]) else: self.__meth[ins]() def run(self): while True: ins = self.__data[self.__iptr] self.__iptr += 1 if isinstance(ins, tuple): self.__meth[ins[0]](ins[1]) else: self.__meth[ins]() def __oint(self): for digit in str(self.__stck.pop()): msvcrt.putwch(digit) def __ichr(self): addr = self.__stck.pop() # Input Routine while msvcrt.kbhit(): msvcrt.getwch() while True: char = msvcrt.getwch() if char in '\x00\xE0': msvcrt.getwch() elif char in string.printable: char = char.replace('\r', '\n') msvcrt.putwch(char) break item = ord(char) # Storing Number self.__heap.set_(addr, item) def __iint(self): addr = self.__stck.pop() # Input Routine while msvcrt.kbhit(): msvcrt.getwch() buff = '' char = msvcrt.getwch() while char != '\r' or not buff: if char in '\x00\xE0': msvcrt.getwch() elif char in '+-' and not buff: msvcrt.putwch(char) buff += char elif '0' <= char <= '9': msvcrt.putwch(char) buff += char elif char == '\b': if buff: buff = buff[:-1] msvcrt.putwch(char) msvcrt.putwch(' ') msvcrt.putwch(char) char = msvcrt.getwch() msvcrt.putwch(char) msvcrt.putwch('\n') item = int(buff) # Storing Number self.__heap.set_(addr, item) def __goto(self, label): self.__iptr = self.__jump[label] def __zero(self, label): if self.__stck.pop() == 0: self.__iptr = self.__jump[label] def __less(self, label): if self.__stck.pop() < 0: self.__iptr = self.__jump[label] def __exit(self): self.__setup_exec() raise CleanExit() def 
__set(self): item = self.__stck.pop() addr = self.__stck.po


  • Choosing a scripting language for a game and implementing it

    - by Radius
    Hello, I am currently developing a 3D Action/RPG game in C++, and I would like some advice in choosing a scripting language to program the AI of the game. My team comes from a modding background, and in fact we are still finishing work on a mod of the game Gothic. In that game (which we also got our inspiration from) the language DAEDALUS (created by Piranha Bytes, the makers of the game) is used. Here is a full description of said language. The main thing to notice about this is that it uses instances moreso than classes. The game engine is closed, and so one can only guess about the internal implementation of this language, but the main thing I am looking for in a scripting language (which ideally would be quite similar but preferably also more powerful than DAEDALUS) is the fact that there are de facto 3 'separations' of classes - ie classes, instances and (instances of instances?). I think it will be easier to understand what I want if I provide an example. Take a regular NPC. First of all you have a class defined which (I understand) mirrors the (class or structure) inside the engine: CLASS C_NPC { VAR INT id ; // absolute ID des NPCs VAR STRING name [5] ; // Namen des NPC VAR STRING slot ; VAR INT npcType ; VAR INT flags ; VAR INT attribute [ATR_INDEX_MAX] ; VAR INT protection [PROT_INDEX_MAX]; VAR INT damage [DAM_INDEX_MAX] ; VAR INT damagetype ; VAR INT guild,level ; VAR FUNC mission [MAX_MISSIONS] ; var INT fight_tactic ; VAR INT weapon ; VAR INT voice ; VAR INT voicePitch ; VAR INT bodymass ; VAR FUNC daily_routine ; // Tagesablauf VAR FUNC start_aistate ; // Zustandsgesteuert // ********************** // Spawn // ********************** VAR STRING spawnPoint ; // Beim Tod, wo respawnen ? VAR INT spawnDelay ; // Mit Delay in (Echtzeit)-Sekunden // ********************** // SENSES // ********************** VAR INT senses ; // Sinne VAR INT senses_range ; // Reichweite der Sinne in cm // ********************** // Feel free to use // ********************** VAR INT aivar [50] ; VAR STRING wp ; // ********************** // Experience dependant // ********************** VAR INT exp ; // EXerience Points VAR INT exp_next ; // EXerience Points needed to advance to next level VAR INT lp ; // Learn Points }; Then, you can also define prototypes (which set some default values). 
But how you actually define an NPC is like this: instance BAU_900_Ricelord (Npc_Default) //Inherit from prototype Npc_Default { //-------- primary data -------- name = "Ryzowy Ksiaze"; npctype = NPCTYPE_GUARD; guild = GIL_BAU; level = 10; voice = 12; id = 900; //-------- abilities -------- attribute[ATR_STRENGTH] = 50; attribute[ATR_DEXTERITY] = 10; attribute[ATR_MANA_MAX] = 0; attribute[ATR_MANA] = 0; attribute[ATR_HITPOINTS_MAX]= 170; attribute[ATR_HITPOINTS] = 170; //-------- visuals -------- // animations Mdl_SetVisual (self,"HUMANS.MDS"); Mdl_ApplyOverlayMds (self,"Humans_Arrogance.mds"); Mdl_ApplyOverlayMds (self,"HUMANS_DZIDA.MDS"); // body mesh ,bdytex,skin,head mesh ,headtex,teethtex,ruestung Mdl_SetVisualBody (self,"Hum_Body_CookSmith",1,1,"Hum_Head_FatBald",91 , 0,-1); B_Scale (self); Mdl_SetModelFatness(self,2); fight_tactic = FAI_HUMAN_STRONG; //-------- Talente -------- Npc_SetTalentSkill (self,NPC_TALENT_1H,1); //-------- inventory -------- CreateInvItems (self, ItFoRice,10); CreateInvItem (self, ItFoWine); CreateInvItems(self, ItMiNugget,40); EquipItem (self, Heerscherstab); EquipItem (self, MOD_AMULETTDESREISLORDS); CreateInvItem (self, ItMi_Alchemy_Moleratlubric_01); //CreateInvItem (self,ItKey_RB_01); EquipItem (self, Ring_des_Lebens); //-------------Daily Routine------------- daily_routine = Rtn_start_900; }; FUNC VOID Rtn_start_900 () { TA_Boss (07,00,20,00,"NC_RICELORD"); TA_SitAround (20,00,24,00,"NC_RICELORD_SIT"); TA_Sleep (24,00,07,00,"NC_RICEBUNKER_10"); }; As you can see, the instance declaration is more like a constructor function, setting values and calling functions from within. This still wouldn't pose THAT much of a problem, if not for one more thing: multiple copies of this instance. For example, you can spawn multiple BAU_900_Ricelord's, and each of them keeps track of its own AI state, hitpoints etc. Now I think the instances are represented as ints (maybe even as the id of the NPC) inside the engine, as whenever (inside the script) you use the expression BAU_900_Ricelord it can be only assigned to an int variable, and most functions that operate on NPCs take that int value. However to directly modify its hitpoints etc you have to do something like var C_NPC npc = GetNPC(Bau_900_Ricelord); npc.attribute[ATR_HITPOINTS] = 10; ie get the actual C_NPC object that represents it. To finally recap - is it possible to get this kind of behaviour in any scripting languages you know of, or am I stuck with having to make my own? Or maybe there is an even better way of representing NPC's and their behaviours that way. The IDEAL language for scripting for me would be C#, as I simply adore that language, but somehow I doubt it is possible or indeed feasible to try and implement a similar kind of behaviour in C#. Many thanks


  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }


  • Problem displaying data on one page

    - by user318068
    Hi. I have a problem with the following code. It is supposed to display all of a member's invites on one page - if he has five invites, all five should be displayed. Instead it only displays one invite per page, and only after that invite is accepted or rejected does it display the next one. That is not what I want; I would like to be able to display all invites on one page. I think the problem is in the order of the code.

    My code:

        <?php
        session_start();
        if (!isset($_SESSION['user_id'])) {
            header("Location: login.php");
        }
        $id = $_SESSION['user_id'];
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>Untitled Document</title>
        </head>
        <body>
        <center>
        <?php
        include("connect.php");
        $sql = mysql_query("select * from ninvite where recieverMemberID ='$id' and viwed= '0'");
        $num = mysql_num_rows($sql);
        echo $num;
        if ($num > 0) {
            while ($row = mysql_fetch_array($sql)) {
                $sender = $row['SenderMemberID'];
                $room   = $row['RoomID'];
                $sql  = mysql_query("select MemberName from members where MemberID ='$sender' ");
                $sql1 = mysql_query("select RoomName from rooms where RoomID ='$room' ");
                while ($row = mysql_fetch_array($sql)) {
                    $mem = $row['MemberName'];
                }
                while ($rows = mysql_fetch_array($sql1)) {
                    $Ro = $rows['RoomName'];
        ?>
        <form action="join.php" method="post">
            <label> </label>
            <br/>
            <label><?php echo " you have invite from $mem to join $Ro"; ?></label>
            <br/><br/>
            <label>accept</label>
            <input name="radio1" type="radio" value="accpet" />
            <label>reject</label>
            <input name="radio1" type="radio" value="Reject" />
            <br/>
            <input type="submit" name="submit" value="done" />
        </form>
        <?php
                }
            }
        }
        ?>
        </center>
        </body>
        </html>

    Thanks a lot.

    My SQL:

        -- phpMyAdmin SQL Dump
        -- version 3.2.4
        -- http://www.phpmyadmin.net
        --
        -- Host: localhost
        -- Generation Time: May 07, 2010 at 12:50 ?
        -- Server version: 5.1.41
        -- PHP Version: 5.3.1

        SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
        /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
        /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
        /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
        /*!40101 SET NAMES utf8 */;

        --
        -- Database: tr
        --

        --
        -- Table structure for table joinroom
        --

        CREATE TABLE IF NOT EXISTS joinroom (
          MemberID int(10) NOT NULL,
          RoomID int(10) NOT NULL,
          PRIMARY KEY (MemberID, RoomID)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

        --
        -- Dumping data for table joinroom
        --

        INSERT INTO joinroom (MemberID, RoomID) VALUES
        (28, 1);

        --
        -- Table structure for table members
        --

        CREATE TABLE IF NOT EXISTS members (
          MemberID int(10) unsigned NOT NULL AUTO_INCREMENT,
          MemberName varchar(20) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberPass varchar(10) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberEmail varchar(30) CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberLocation text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          MemberImg text CHARACTER SET utf8 COLLATE utf8_bin NOT NULL,
          PRIMARY KEY (MemberID)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=34;

        --
        -- Dumping data for table members
        --

        INSERT INTO members (MemberID, MemberName, MemberPass, MemberEmail, MemberLocation, MemberImg) VALUES
        (28, 'marwa', '1234', '[email protected]', 'mmmmmm', 'dddddddddd'),
        (29, 'nora', '1234', '[email protected]', 'fffffffffffgg', 'gggggggggggggg'),
        (30, 'soso', '1234', '[email protected]', 'ffffffff', 'kkkkkkkkkkkkkkkkkk'),
        (31, 'gege', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'uuuuuuuuuuuuuuuuu'),
        (32, 'nono', '1234', '[email protected]', 'ggggggggggggaaaaa', 'aaaaaaaaaaaaaaa'),
        (33, 'nda', '1234', '[email protected]', 'kkkkkkkkkkkkkkkk', 'ooooooooooooooo');

        --
        -- Table structure for table ninvite
        --

        CREATE TABLE IF NOT EXISTS ninvite (
          SenderMemberID int(11) NOT NULL AUTO_INCREMENT,
          recieverMemberID varchar(30) NOT NULL,
          RoomID int(11) NOT NULL,
          viwed int(11) NOT NULL,
          PRIMARY KEY (SenderMemberID, recieverMemberID, RoomID)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=33;

        --
        -- Dumping data for table ninvite
        --

        INSERT INTO ninvite (SenderMemberID, recieverMemberID, RoomID, viwed) VALUES
        (28, '33', 1, 0),
        (28, '32', 1, 0),
        (28, '31', 1, 0);

        /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
        /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
        /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;


  • MonoTouch App Crashes When Returning From MFMailComposeViewController

    - by Richard Khan
    My MonoTouch version info:

        Release ID: 20401003
        Git revision: 2f1746af36f421d262dcd2b0542ce86b12158f02
        Build date: 2010-12-23 23:13:38+0000

    The MFMailComposeViewController is displayed and works correctly as a dialog using the following code:

        if (MFMailComposeViewController.CanSendMail) {
            MFMailComposeViewController mail;
            mail = new MFMailComposeViewController ();
            mail.SetSubject ("Subject Test");
            mail.SetMessageBody ("Body Test", false);
            mail.Finished += HandleMailFinished;
            this.navigationController.PresentModalViewController (mail, true);
        } else {
            new UIAlertView ("Mail Failed", "Mail Failed", null, "OK", null).Show ();
        }

    However, once the user selects Cancel | Delete Draft, Cancel | Save Draft, or Send, the app throws a run-time error like the following:

        Stacktrace:
          at (wrapper managed-to-native) MonoTouch.UIKit.UIApplication.UIApplicationMain (int,string[],intptr,intptr) <0x00004
          at (wrapper managed-to-native) MonoTouch.UIKit.UIApplication.UIApplicationMain (int,string[],intptr,intptr) <0x00004
          at MonoTouch.UIKit.UIApplication.Main (string[],string,string) [0x00038] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:26
          at MonoTouch.UIKit.UIApplication.Main (string[]) [0x00000] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:31
          at MailDialog.Application.Main (string[]) [0x00000] in /Users/rrkhan/Projects/Sandbox/MailDialog/Main.cs:15
          at (wrapper runtime-invoke) .runtime_invoke_void_object (object,intptr,intptr,intptr)

        Native stacktrace:
          0  MailDialog 0x000be66f mono_handle_native_sigsegv + 343
          1  MailDialog 0x0000e43e mono_sigsegv_signal_handler + 313
          2  libSystem.B.dylib 0x903e946b _sigtramp + 43
          3  ??? 0xffffffff 0x0 + 4294967295
          4  MessageUI 0x01a9f6b7 -[MFMailComposeController _close] + 284
          5  UIKit 0x01f682f1 -[UIActionSheet(Private) _buttonClicked:] + 258
          6  UIKit 0x01be1a6e -[UIApplication sendAction:to:from:forEvent:] + 119
          7  UIKit 0x01c701b5 -[UIControl sendAction:to:forEvent:] + 67
          8  UIKit 0x01c72647 -[UIControl(Internal) _sendActionsForEvents:withEvent:] + 527
          9  UIKit 0x01c711f4 -[UIControl touchesEnded:withEvent:] + 458
          10 UIKit 0x01c060d1 -[UIWindow _sendTouchesForEvent:] + 567
          11 UIKit 0x01be737a -[UIApplication sendEvent:] + 447
          12 UIKit 0x01bec732 _UIApplicationHandleEvent + 7576
          13 GraphicsServices 0x03eb7a36 PurpleEventCallback + 1550
          14 CoreFoundation 0x00df9064 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION + 52
          15 CoreFoundation 0x00d596f7 __CFRunLoopDoSource1 + 215
          16 CoreFoundation 0x00d56983 __CFRunLoopRun + 979
          17 CoreFoundation 0x00d56240 CFRunLoopRunSpecific + 208
          18 CoreFoundation 0x00d56161 CFRunLoopRunInMode + 97
          19 GraphicsServices 0x03eb6268 GSEventRunModal + 217
          20 GraphicsServices 0x03eb632d GSEventRun + 115
          21 UIKit 0x01bf042e UIApplicationMain + 1160
          22 ??? 0x0a1e4bd9 0x0 + 169757657
          23 ??? 0x0a1e4b12 0x0 + 169757458
          24 ??? 0x0a1e4515 0x0 + 169755925
          25 ??? 0x0a1e4451 0x0 + 169755729
          26 ??? 0x0a1e44ac 0x0 + 169755820
          27 MailDialog 0x0000e202 mono_jit_runtime_invoke + 1360
          28 MailDialog 0x001c92af mono_runtime_invoke + 137
          29 MailDialog 0x001caf6b mono_runtime_exec_main + 714
          30 MailDialog 0x001ca891 mono_runtime_run_main + 812
          31 MailDialog 0x00094fe8 mono_jit_exec + 200
          32 MailDialog 0x0027cf05 main + 3494
          33 MailDialog 0x00002ca1 _start + 208
          34 MailDialog 0x00002bd0 start + 40

        Debug info from gdb:

        warning: Could not find object file "/var/folders/Ny/NyElTwhDGD8kZMqIEeLGXE+++TI/-Tmp-//cc6F1tBs.o" - no debug information available for "template.m".
warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(zlib-helper.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Could not open OSO file /Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(zlib-helper.x86.42.o) to scan for pubtypes for objfile /Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Could not open OSO file /Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o) to scan for pubtypes for objfile /Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(gc.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Could not open OSO file /Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(gc.x86.42.o) to scan for pubtypes for objfile /Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog Error connecting stdout and stderr (127.0.0.1:10001) warning: .o file "/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o)" more recent than executable timestamp in "/Users/rrkhan/Library/Application Support/iPhone Simulator/4.2/Applications/52AF1D24-AADA-48ED-B373-ED08E89E4985/MailDialog.app/MailDialog" warning: Couldn't open object file '/Developer/MonoTouch/SDKs/MonoTouch.iphonesimulator4.2.sdk/usr/lib/libmonotouch.a(monotouch-glue.x86.42.o)' Attaching to process 9992. Reading symbols for shared libraries . done Reading symbols for shared libraries ....................................................................................................................... 
done 0x9038e459 in read$UNIX2003 () 8 0x903a8a12 in __workq_kernreturn () 7 "WebThread" 0x903830fa in mach_msg_trap () 6 0x903b10a6 in __semwait_signal () 5 0x90383136 in semaphore_wait_trap () 4 0x903830fa in mach_msg_trap () 3 0x903a8a12 in __workq_kernreturn () 2 "com.apple.libdispatch-manager" 0x903a9982 in kevent () * 1 "com.apple.main-thread" 0x9038e459 in read$UNIX2003 () Thread 8 (process 9992): 0 0x903a8a12 in __workq_kernreturn () 1 0x903a8fa8 in _pthread_wqthread () 2 0x903a8bc6 in start_wqthread () Thread 7 (process 9992): 0 0x903830fa in mach_msg_trap () 1 0x90383867 in mach_msg () 2 0x00df94a6 in __CFRunLoopServiceMachPort () 3 0x00d56874 in __CFRunLoopRun () 4 0x00d56240 in CFRunLoopRunSpecific () 5 0x00d56161 in CFRunLoopRunInMode () 6 0x04f7c423 in RunWebThread () 7 0x903b085d in _pthread_start () 8 0x903b06e2 in thread_start () Thread 6 (process 9992): 0 0x903b10a6 in __semwait_signal () 1 0x903dcee5 in nanosleep$UNIX2003 () 2 0x903dce23 in usleep$UNIX2003 () 3 0x0027714c in monotouch_pump_gc () 4 0x903b085d in _pthread_start () 5 0x903b06e2 in thread_start () Thread 5 (process 9992): 0 0x90383136 in semaphore_wait_trap () 1 0x0015ae1d in finalizer_thread (unused=0x0) at ../../../../mono/metadata/gc.c:1026 2 0x002034a3 in start_wrapper (data=0x7b16ba0) at ../../../../mono/metadata/threads.c:661 3 0x002448e2 in thread_start_routine (args=0x8037e34) at ../../../../mono/io-layer/wthreads.c:286 4 0x00274357 in GC_start_routine (arg=0x6ff7f60) at ../../../libgc/pthread_support.c:1390 5 0x903b085d in _pthread_start () 6 0x903b06e2 in thread_start () Thread 4 (process 9992): 0 0x903830fa in mach_msg_trap () 1 0x90383867 in mach_msg () 2 0x0011cc46 in mach_exception_thread (arg=0x0) at ../../../../mono/mini/mini-darwin.c:138 3 0x903b085d in _pthread_start () 4 0x903b06e2 in thread_start () Thread 3 (process 9992): 0 0x903a8a12 in __workq_kernreturn () 1 0x903a8fa8 in _pthread_wqthread () 2 0x903a8bc6 in start_wqthread () Thread 2 (process 9992): 0 0x903a9982 in kevent () 1 0x903aa09c in _dispatch_mgr_invoke () 2 0x903a9559 in _dispatch_queue_invoke () 3 0x903a92fe in _dispatch_worker_thread2 () 4 0x903a8d81 in _pthread_wqthread () 5 0x903a8bc6 in start_wqthread () Thread 1 (process 9992): 0 0x9038e459 in read$UNIX2003 () 1 0x000be81f in mono_handle_native_sigsegv (signal=11, ctx=0xbfffd238) at ../../../../mono/mini/mini-exceptions.c:1826 2 0x0000e43e in mono_sigsegv_signal_handler (_dummy=10, info=0xbfffd1f8, context=0xbfffd238) at ../../../../mono/mini/mini.c:4846 3 4 0x028d6a63 in objc_msgSend () 5 0x01ad469f in func.24012 () 6 0x01a9f6b7 in -[MFMailComposeController _close] () 7 0x01f682f1 in -[UIActionSheet(Private) _buttonClicked:] () 8 0x01be1a6e in -[UIApplication sendAction:to:from:forEvent:] () 9 0x01c701b5 in -[UIControl sendAction:to:forEvent:] () 10 0x01c72647 in -[UIControl(Internal) _sendActionsForEvents:withEvent:] () 11 0x01c711f4 in -[UIControl touchesEnded:withEvent:] () 12 0x01c060d1 in -[UIWindow _sendTouchesForEvent:] () 13 0x01be737a in -[UIApplication sendEvent:] () 14 0x01bec732 in _UIApplicationHandleEvent () 15 0x03eb7a36 in PurpleEventCallback () 16 0x00df9064 in CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION () 17 0x00d596f7 in __CFRunLoopDoSource1 () 18 0x00d56983 in __CFRunLoopRun () 19 0x00d56240 in CFRunLoopRunSpecific () 20 0x00d56161 in CFRunLoopRunInMode () 21 0x03eb6268 in GSEventRunModal () 22 0x03eb632d in GSEventRun () 23 0x01bf042e in UIApplicationMain () 24 0x0a1e4bd9 in ?? () 25 0x0a1e4b12 in ?? () 26 0x0a1e4515 in ?? 
() 27 0x0a1e4451 in ?? () 28 0x0a1e44ac in ?? () 29 0x0000e202 in mono_jit_runtime_invoke (method=0xa806e6c, obj=0x0, params=0xbfffedbc, exc=0x0) at ../../../../mono/mini/mini.c:4733 30 0x001c92af in mono_runtime_invoke (method=0xa806e6c, obj=0x0, params=0xbfffedbc, exc=0x0) at ../../../../mono/metadata/object.c:2615 31 0x001caf6b in mono_runtime_exec_main (method=0xa806e6c, args=0xa6a34e0, exc=0x0) at ../../../../mono/metadata/object.c:3581 32 0x001ca891 in mono_runtime_run_main (method=0xa806e6c, argc=0, argv=0xbfffeef4, exc=0x0) at ../../../../mono/metadata/object.c:3355 33 0x00094fe8 in mono_jit_exec (domain=0x6f8fe58, assembly=0xa200730, argc=1, argv=0xbfffeef0) at ../../../../mono/mini/driver.c:1094 34 0x0027cf05 in main () ================================================================= Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application. Unhandled Exception: System.NullReferenceException: Object reference not set to an instance of an object at (wrapper managed-to-native) MonoTouch.UIKit.UIApplication:UIApplicationMain (int,string[],intptr,intptr) at MonoTouch.UIKit.UIApplication.Main (System.String[] args, System.String principalClassName, System.String delegateClassName) [0x00038] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:26 at MonoTouch.UIKit.UIApplication.Main (System.String[] args) [0x00000] in /Users/plasma/Source/iphone/monotouch/UIKit/UIApplication.cs:31 at MailDialog.Application.Main (System.String[] args) [0x00000] in /Users/rrkhan/Projects/Sandbox/MailDialog/Main.cs:15 I have a very simple sample project illustrating the problem. I can sent you if required.
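    A common culprit for this exact crash in MonoTouch is the mail controller living only in a local variable: nothing managed keeps it (or its Finished handler) alive once the method returns, so the garbage collector can collect the wrapper while the native dialog is still on screen, and the callback from [MFMailComposeController _close] lands on a collected object. A minimal sketch of the usual workaround, assuming a field on the hosting controller (names such as navigationController follow the question's code):

        MFMailComposeViewController mail;   // field, not a method-local

        void ComposeMail ()
        {
            if (!MFMailComposeViewController.CanSendMail) {
                new UIAlertView ("Mail Failed", "Mail Failed", null, "OK", null).Show ();
                return;
            }

            mail = new MFMailComposeViewController ();
            mail.SetSubject ("Subject Test");
            mail.SetMessageBody ("Body Test", false);
            mail.Finished += HandleMailFinished;
            navigationController.PresentModalViewController (mail, true);
        }

        void HandleMailFinished (object sender, MFComposeResultEventArgs e)
        {
            // Dismiss the dialog first, then release our reference.
            e.Controller.DismissModalViewControllerAnimated (true);
            mail = null;
        }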

    Read the article

  • error in coding a lexer in c

    - by mekasperasky
        #include<stdio.h>
        #include<ctype.h>
        #include<string.h>

        /* this is a lexer which recognizes constants, variables, symbols,
           identifiers, functions, comments and also header files. It stores
           the lexemes in 3 different files. One file contains all the headers
           and the comments. Another file will contain all the variables,
           another will contain all the symbols. */

        int main()
        {
            int i=0,j,k,count=0;
            char a,b[100],c[10000],d[100];
            memset ( d, 0, 100 );
            j=30;
            FILE *fp1,*fp2;
            fp1=fopen("source.txt","r"); //the source file is opened in read only mode which will passed through the lexer
            fp2=fopen("lext.txt","w");
            //now lets remove all the white spaces and store the rest of the words in a file
            if(fp1==NULL)
            {
                perror("failed to open source.txt");
                //return EXIT_FAILURE;
            }
            i=0;
            k=0;
            while(!feof(fp1))
            {
                a=fgetc(fp1);
                if(a!=' '&&a!='\n')
                {
                    if (!isalpha(a))
                    {
                        switch(a)
                        {
                            case '+':{fprintf(fp2,"+ ----> PLUS \n"); i=0;break;}
                            case '-':{fprintf(fp2,"- ---> MINUS \n"); i=0;break;}
                            case '*':{fprintf(fp2, "* --->MULT \n"); i=0;break;}
                            case '/':{fprintf(fp2, "/ --->DIV \n"); i=0;break;}
                            //case '+=':fprintf(fp2, "%.20s\n", "ADD_ASSIGN");
                            //case '-=':fprintf(fp2, "%.20s\n", "SUB_ASSIGN");
                            case '=':{fprintf(fp2, "= ---> ASSIGN \n"); i=0;break;}
                            case '%':{fprintf(fp2, "% ---> MOD \n"); i=0;break;}
                            case '<':{fprintf(fp2, "< ---> LESSER_THAN \n"); i=0;break;}
                            case '>':{fprintf(fp2, "> --> GREATER_THAN \n"); i=0;break;}
                            //case '++':fprintf(fp2, "%.20s\n", "INCREMENT");
                            //case '--':fprintf(fp2, "%.20s\n", "DECREMENT");
                            //case '==':fprintf(fp2, "%.20s\n", "ASSIGNMENT");
                            case ';':{fprintf(fp2, "; --->SEMI_COLUMN \n"); i=0;break;}
                            case ':':{fprintf(fp2, ": --->COLUMN \n"); i=0;break;}
                            case '(':{fprintf(fp2, "( --->LPAR \n"); i=0;break;}
                            case ')':{fprintf(fp2, ") --->RPAR \n"); i=0;break;}
                            case '{':{fprintf(fp2, "{ --->LBRACE \n"); i=0;break;}
                            case '}':{fprintf(fp2, "} ---> RBRACE \n"); i=0;break;}
                        }
                    }
                    else
                    {
                        d[i]=a;
                        //printf("%c\n",d[i]);
                        i=i+1;
                    }
                    //}
                    /* we can make the lexer more complex by including even more depths of checks for the symbols */
                }
                else
                {
                    d[i+1]='\0';
                    printf("\n");
                    if((strcmp(d,"if ")==0)){fprintf(fp2,"if ----> IDENTIFIER \n");
                        //printf("%s \n",d);
                        memset ( d, 0, 100 );
                        //printf("%s \n",d);
                        count=count+1;}
                    else if(strcmp(d,"then")==0){fprintf(fp2,"then ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"else")==0){fprintf(fp2,"else ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"switch")==0){fprintf(fp2,"switch ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"printf")==0){fprintf(fp2,"prtintf ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"scanf")==0){fprintf(fp2,"scanf ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"NULL")==0){fprintf(fp2,"NULL ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"int")==0){fprintf(fp2,"INT ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"char")==0){fprintf(fp2,"char ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"float")==0){fprintf(fp2,"float ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"long")==0){fprintf(fp2,"long ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"double")==0){fprintf(fp2,"double ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"const")==0){fprintf(fp2,"const ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"continue")==0)fprintf(fp2,"continue ----> IDENTIFIER \n");
                    else if(strcmp(d,"size of")==0){fprintf(fp2,"size of ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"register")==0){fprintf(fp2,"register ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"short")==0){fprintf(fp2,"short ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"auto")==0){fprintf(fp2,"auto ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"while")==0){fprintf(fp2,"while ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"do")==0){fprintf(fp2,"do ----> IDENTIFIER \n"); count=count+1;}
                    else if(strcmp(d,"case")==0){fprintf(fp2,"case ----> IDENTIFIER \n"); count=count+1;}
                    else if (isdigit(d[i]))
                    {
                        fprintf(fp2,"%s ---->NUMBER",d);
                    }
                    else if (isalpha(a))
                    {
                        fprintf(fp2,"%s ----> Variable",d);
                        //printf("%s",d);
                        // memset ( d, 0, 100 );
                    }
                    //fprintf(fp2, "s\n", b);
                    i=0;
                    k=k+1;
                    continue;
                }
                i=i+1;
                k=k+1;
            }
            fclose(fp1);
            fclose(fp2);
            printf("%d",count);
            return 0;
        }

    In this code, my source.txt contains "if (a+b)". But only (, + and ) are getting written into lext.txt, not the identifier "if" or the variables "a" and "b". Any particular reason why?

    Read the article

  • makefile pathing issues on OSX

    - by Justin808
    OK, I thought I would try one last update and see if it gets me anywhere. I've created a very small test case. This should not build anything; it just tests the path settings. Also, I've set up the path so there are no spaces. This is the smallest, simplest test case I could come up with. This makefile will set the path, echo the path, run avr-gcc -v with the full path specified, and then try to run it without the full path specified. It should find avr-gcc in the path on the second try, but it does not.

    makefile

        TOOLCHAIN := /Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain
        PATH := ${TOOLCHAIN}/bin:${PATH}
        export PATH

        all:
            @echo ${PATH}
            @echo --------
            "${TOOLCHAIN}/bin/avr-gcc" -v
            @echo --------
            avr-gcc -v

    output

        JUSTINs-MacBook-Air:Untitled justinzaun$ make
        /Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
        --------
        "/Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin/avr-gcc" -v
        Using built-in specs.
        COLLECT_GCC=/Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin/avr-gcc
        COLLECT_LTO_WRAPPER=/Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain/bin/../libexec/gcc/avr/4.6.3/lto-wrapper
        Target: avr
        Configured with: /Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../gcc/configure --prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --exec-prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --datadir=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --target=avr --enable-languages=c,objc,c++ --disable-libssp --disable-lto --disable-nls --disable-libgomp --disable-gdbtk --disable-threads --enable-poison-system-directories
        Thread model: single
        gcc version 4.6.3 (GCC)
        --------
        avr-gcc -v
        make: avr-gcc: No such file or directory
        make: *** [all] Error 1
        JUSTINs-MacBook-Air:Untitled justinzaun$

    Original Question

    I'm trying to set the path from within the makefile. I can't seem to do this on OSX. Setting the path with PATH := /new/bin/:$(PATH) does not work. See my makefile below.

    makefile

        PROJECTNAME = Untitled

        # Name of target controller
        # (e.g. 'at90s8515', see the available avr-gcc mmcu
        # options for possible values)
        MCU = atmega640

        # id to use with programmer
        # default: PROGRAMMER_MCU=$(MCU)
        # In case the programer used, e.g avrdude, doesn't
        # accept the same MCU name as avr-gcc (for example
        # for ATmega8s, avr-gcc expects 'atmega8' and
        # avrdude requires 'm8')
        PROGRAMMER_MCU = $(MCU)

        # Source files
        # List C/C++/Assembly source files:
        # (list all files to compile, e.g. 'a.c b.cpp as.S'):
        # Use .cc, .cpp or .C suffix for C++ files, use .S
        # (NOT .s !!!) for assembly source code files.
        PRJSRC = main.c \
                 utils.c

        # additional includes (e.g. -I/path/to/mydir)
        INC =

        # libraries to link in (e.g. -lmylib)
        LIBS =

        # Optimization level,
        # use s (size opt), 1, 2, 3 or 0 (off)
        OPTLEVEL = s

        ### You should not have to touch anything below this line ###

        PATH := /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:$(PATH)
        CPATH := /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/include

        # HEXFORMAT -- format for .hex file output
        HEXFORMAT = ihex

        # compiler
        CFLAGS = -I. $(INC) -g -mmcu=$(MCU) -O$(OPTLEVEL) \
            -fpack-struct -fshort-enums \
            -funsigned-bitfields -funsigned-char \
            -Wall -Wstrict-prototypes \
            -Wa,-ahlms=$(firstword \
            $(filter %.lst, $(<:.c=.lst)))

        # c++ specific flags
        CPPFLAGS = -fno-exceptions \
            -Wa,-ahlms=$(firstword \
            $(filter %.lst, $(<:.cpp=.lst)) \
            $(filter %.lst, $(<:.cc=.lst)) \
            $(filter %.lst, $(<:.C=.lst)))

        # assembler
        ASMFLAGS = -I. $(INC) -mmcu=$(MCU) \
            -x assembler-with-cpp \
            -Wa,-gstabs,-ahlms=$(firstword \
            $(<:.S=.lst) $(<.s=.lst))

        # linker
        LDFLAGS = -Wl,-Map,$(TRG).map -mmcu=$(MCU) \
            -lm $(LIBS)

        ##### executables ####
        CC=avr-gcc
        OBJCOPY=avr-objcopy
        OBJDUMP=avr-objdump
        SIZE=avr-size
        AVRDUDE=avrdude
        REMOVE=rm -f

        ##### automatic target names ####
        TRG=$(PROJECTNAME).out
        DUMPTRG=$(PROJECTNAME).s
        HEXROMTRG=$(PROJECTNAME).hex
        HEXTRG=$(HEXROMTRG) $(PROJECTNAME).ee.hex

        # Start by splitting source files by type
        # C++
        CPPFILES=$(filter %.cpp, $(PRJSRC))
        CCFILES=$(filter %.cc, $(PRJSRC))
        BIGCFILES=$(filter %.C, $(PRJSRC))
        # C
        CFILES=$(filter %.c, $(PRJSRC))
        # Assembly
        ASMFILES=$(filter %.S, $(PRJSRC))

        # List all object files we need to create
        OBJDEPS=$(CFILES:.c=.o) \
            $(CPPFILES:.cpp=.o) \
            $(BIGCFILES:.C=.o) \
            $(CCFILES:.cc=.o) \
            $(ASMFILES:.S=.o)

        # Define all lst files.
        LST=$(filter %.lst, $(OBJDEPS:.o=.lst))

        # All the possible generated assembly
        # files (.s files)
        GENASMFILES=$(filter %.s, $(OBJDEPS:.o=.s))

        .SUFFIXES : .c .cc .cpp .C .o .out .s .S \
            .hex .ee.hex .h .hh .hpp

        # Make targets:
        # all, disasm, stats, hex, writeflash/install, clean
        all: $(TRG)

        $(TRG): $(OBJDEPS)
            $(CC) $(LDFLAGS) -o $(TRG) $(OBJDEPS)

        #### Generating assembly ####
        # asm from C
        %.s: %.c
            $(CC) -S $(CFLAGS) $< -o $@

        # asm from (hand coded) asm
        %.s: %.S
            $(CC) -S $(ASMFLAGS) $< > $@

        # asm from C++
        .cpp.s .cc.s .C.s :
            $(CC) -S $(CFLAGS) $(CPPFLAGS) $< -o $@

        #### Generating object files ####
        # object from C
        .c.o:
            $(CC) $(CFLAGS) -c $< -o $@

        # object from C++ (.cc, .cpp, .C files)
        .cc.o .cpp.o .C.o :
            $(CC) $(CFLAGS) $(CPPFLAGS) -c $< -o $@

        # object from asm
        .S.o :
            $(CC) $(ASMFLAGS) -c $< -o $@

        #### Generating hex files ####
        # hex files from elf
        .out.hex:
            $(OBJCOPY) -j .text \
                -j .data \
                -O $(HEXFORMAT) $< $@

        .out.ee.hex:
            $(OBJCOPY) -j .eeprom \
                --change-section-lma .eeprom=0 \
                -O $(HEXFORMAT) $< $@

        #### Information ####
        info:
            @echo PATH:
            @echo "$(PATH)"
            $(CC) -v
            which $(CC)

        #### Cleanup ####
        clean:
            $(REMOVE) $(TRG) $(TRG).map $(DUMPTRG)
            $(REMOVE) $(OBJDEPS)
            $(REMOVE) $(LST)
            $(REMOVE) $(GENASMFILES)
            $(REMOVE) $(HEXTRG)

    error

        JUSTINs-MacBook-Air:Untitled justinzaun$ make
        avr-gcc -I.  -g -mmcu=atmega640 -Os -fpack-struct -fshort-enums -funsigned-bitfields -funsigned-char -Wall -Wstrict-prototypes -Wa,-ahlms=main.lst -c main.c -o main.o
        make: avr-gcc: No such file or directory
        make: *** [main.o] Error 1
        JUSTINs-MacBook-Air:Untitled justinzaun$

    If I change my CC= to include the full path:

        CC=/Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin/avr-gcc

    then it finds it, but this doesn't seem the correct way to do things. For instance, it then tries to use the system as (the assembler), not the one on the correct path.

    update - Just to be sure, I'm adding the output of my ls command too so everyone knows the file exists. Also, I've added a make info target to the makefile and am showing that output as well.

        JUSTINs-MacBook-Air:Untitled justinzaun$ ls /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin
        ar            avr-elfedit   avr-man      avr-strip   objcopy
        as            avr-g++       avr-nm       avrdude     objdump
        avr-addr2line avr-gcc       avr-objcopy  c++         ranlib
        avr-ar        avr-gcc-4.6.3 avr-objdump  g++         strip
        avr-as        avr-gcov      avr-ranlib   gcc
        avr-c++       avr-gprof     avr-readelf  ld
        avr-c++filt   avr-ld        avr-size     ld.bfd
        avr-cpp       avr-ld.bfd    avr-strings  nm
        JUSTINs-MacBook-Air:Untitled justinzaun$

    Output of make info with the \ in my path:

        JUSTINs-MacBook-Air:Untitled justinzaun$ make info
        PATH:
        /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
        avr-gcc -v
        make: avr-gcc: No such file or directory
        make: *** [info] Error 1
        JUSTINs-MacBook-Air:Untitled justinzaun$

    Output of make info with the \ not in my path:

        JUSTINs-MacBook-Air:Untitled justinzaun$ make info
        PATH:
        /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
        avr-gcc -v
        make: avr-gcc: No such file or directory
        make: *** [info] Error 1
        JUSTINs-MacBook-Air:Untitled justinzaun$

    update - When I have my CC set to include the full path as described above, this is the result of make info:

        JUSTINs-MacBook-Air:Untitled justinzaun$ make info
        PATH:
        /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
        /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin/avr-gcc -v
        Using built-in specs.
        COLLECT_GCC=/Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin/avr-gcc
        COLLECT_LTO_WRAPPER=/Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin/../libexec/gcc/avr/4.6.3/lto-wrapper
        Target: avr
        Configured with: /Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../gcc/configure --prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --exec-prefix=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --datadir=/Users/justinzaun/Development/AVRBuilder/Packages/gccobj/../build/ --target=avr --enable-languages=c,objc,c++ --disable-libssp --disable-lto --disable-nls --disable-libgomp --disable-gdbtk --disable-threads --enable-poison-system-directories
        Thread model: single
        gcc version 4.6.3 (GCC)
        which /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR\ Builder.app/Contents/Resources/avrchain/bin/avr-gcc
        /Users/justinzaun/Library/Developer/Xcode/DerivedData/AVR_Builder-gxiykwiwjywvoagykxvmotvncbyd/Build/Products/Debug/AVR Builder.app/Contents/Resources/avrchain/bin/avr-gcc
        JUSTINs-MacBook-Air:Untitled justinzaun$
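    Two details stand out in the transcripts above: the echoed PATH is correct, and the failure message is prefixed "make:" rather than "/bin/sh:", i.e. make itself is failing to locate avr-gcc when it executes the simple recipe line directly. Until the PATH propagation is sorted out, one way to sidestep the lookup entirely is to name every tool through the TOOLCHAIN variable instead of relying on PATH. A minimal sketch under that assumption (TOOLCHAIN as in the test makefile above; extend the tool list as needed):

        TOOLCHAIN := /Users/justinzaun/Desktop/AVRBuilder.app/Contents/Resources/avrchain

        # Name the tools explicitly instead of depending on PATH lookup.
        CC      := $(TOOLCHAIN)/bin/avr-gcc
        OBJCOPY := $(TOOLCHAIN)/bin/avr-objcopy
        OBJDUMP := $(TOOLCHAIN)/bin/avr-objdump
        SIZE    := $(TOOLCHAIN)/bin/avr-size

        all:
            $(CC) -v

    Because there are no spaces anywhere under that prefix, no escaping is needed; for the DerivedData path containing "AVR Builder.app", the same idea works, but every use of $(CC) must stay quoted.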

    Read the article

  • How can I use Perl regular expressions to parse XML data?

    - by Luke
    I have a pretty long piece of XML that I want to parse. I want to remove everything except for the subclass-code and city, so that I am left with something like the example below.

    EXAMPLE

        TEST SUBCLASS|MIAMI

    CODE

        <?xml version="1.0" standalone="no"?>
        <web-export>
        <run-date>06/01/2010
        <pub-code>TEST
        <ad-type>TEST
        <cat-code>Real Estate</cat-code>
        <class-code>TEST</class-code>
        <subclass-code>TEST SUBCLASS</subclass-code>
        <placement-description></placement-description>
        <position-description>Town House</position-description>
        <subclass3-code></subclass3-code>
        <subclass4-code></subclass4-code>
        <ad-number>0000284708-01</ad-number>
        <start-date>05/28/2010</start-date>
        <end-date>06/09/2010</end-date>
        <line-count>6</line-count>
        <run-count>13</run-count>
        <customer-type>Private Party</customer-type>
        <account-number>100099237</account-number>
        <account-name>DOE, JOHN</account-name>
        <addr-1>207 CLARENCE STREET</addr-1>
        <addr-2> </addr-2>
        <city>MIAMI</city>
        <state>FL</state>
        <postal-code>02910</postal-code>
        <country>USA</country>
        <phone-number>4014612880</phone-number>
        <fax-number></fax-number>
        <url-addr> </url-addr>
        <email-addr>[email protected]</email-addr>
        <pay-flag>N</pay-flag>
        <ad-description>DEANESTATES2BEDS2BATHSAPPLIANCED</ad-description>
        <order-source>Import</order-source>
        <order-status>Live</order-status>
        <payor-acct>100099237</payor-acct>
        <agency-flag>N</agency-flag>
        <rate-note></rate-note>
        <ad-content>
        MIAMI&#47;Dean Estates&#58; 2 beds&#44; 2 baths&#46; Applianced&#46; Central air&#46; Carpets&#46; Laundry&#46; 2 decks&#46; Pool&#46; Parking&#46; Close to everything&#46;No smoking&#46; No utilities&#46; &#36;1275 mo&#46; 401&#45;578&#45;1501&#46;
        </ad-content>
        </ad-type>
        </pub-code>
        </run-date>
        </web-export>

    PERL

    So what I want to do is open an existing file, read the contents, and then use regular expressions to eliminate the unnecessary XML tags.

        open(READFILE, "FILENAME");
        while(<READFILE>)
        {
            $_ =~ s/<\?xml version="(.*)" standalone="(.*)"\?>\n.*//g;
            $_ =~ s/<subclass-code>//g;
            $_ =~ s/<\/subclass-code>\n.*/|/g;
            $_ =~ s/(.*)PJ RER Houses /PJ RER Houses/g;
            $_ =~ s/\G //g;
            $_ =~ s/<city>//g;
            $_ =~ s/<\/city>\n.*//g;
            $_ =~ s/<(\/?)web-export>(.*)\n.*//g;
            $_ =~ s/<(\/?)run-date>(.*)\n.*//g;
            $_ =~ s/<(\/?)pub-code>(.*)\n.*//g;
            $_ =~ s/<(\/?)ad-type>(.*)\n.*//g;
            $_ =~ s/<(\/?)cat-code>(.*)<(\/?)cat-code>\n.*//g;
            $_ =~ s/<(\/?)class-code>(.*)<(\/?)class-code>\n.*//g;
            $_ =~ s/<(\/?)placement-description>(.*)<(\/?)placement-description>\n.*//g;
            $_ =~ s/<(\/?)position-description>(.*)<(\/?)position-description>\n.*//g;
            $_ =~ s/<(\/?)subclass3-code>(.*)<(\/?)subclass3-code>\n.*//g;
            $_ =~ s/<(\/?)subclass4-code>(.*)<(\/?)subclass4-code>\n.*//g;
            $_ =~ s/<(\/?)ad-number>(.*)<(\/?)ad-number>\n.*//g;
            $_ =~ s/<(\/?)start-date>(.*)<(\/?)start-date>\n.*//g;
            $_ =~ s/<(\/?)end-date>(.*)<(\/?)end-date>\n.*//g;
            $_ =~ s/<(\/?)line-count>(.*)<(\/?)line-count>\n.*//g;
            $_ =~ s/<(\/?)run-count>(.*)<(\/?)run-count>\n.*//g;
            $_ =~ s/<(\/?)customer-type>(.*)<(\/?)customer-type>\n.*//g;
            $_ =~ s/<(\/?)account-number>(.*)<(\/?)account-number>\n.*//g;
            $_ =~ s/<(\/?)account-name>(.*)<(\/?)account-name>\n.*//g;
            $_ =~ s/<(\/?)addr-1>(.*)<(\/?)addr-1>\n.*//g;
            $_ =~ s/<(\/?)addr-2>(.*)<(\/?)addr-2>\n.*//g;
            $_ =~ s/<(\/?)state>(.*)<(\/?)state>\n.*//g;
            $_ =~ s/<(\/?)postal-code>(.*)<(\/?)postal-code>\n.*//g;
            $_ =~ s/<(\/?)country>(.*)<(\/?)country>\n.*//g;
            $_ =~ s/<(\/?)phone-number>(.*)<(\/?)phone-number>\n.*//g;
            $_ =~ s/<(\/?)fax-number>(.*)<(\/?)fax-number>\n.*//g;
            $_ =~ s/<(\/?)url-addr>(.*)<(\/?)url-addr>\n.*//g;
            $_ =~ s/<(\/?)email-addr>(.*)<(\/?)email-addr>\n.*//g;
            $_ =~ s/<(\/?)pay-flag>(.*)<(\/?)pay-flag>\n.*//g;
            $_ =~ s/<(\/?)ad-description>(.*)<(\/?)ad-description>\n.*//g;
            $_ =~ s/<(\/?)order-source>(.*)<(\/?)order-source>\n.*//g;
            $_ =~ s/<(\/?)order-status>(.*)<(\/?)order-status>\n.*//g;
            $_ =~ s/<(\/?)payor-acct>(.*)<(\/?)payor-acct>\n.*//g;
            $_ =~ s/<(\/?)agency-flag>(.*)<(\/?)agency-flag>\n.*//g;
            $_ =~ s/<(\/?)rate-note>(.*)<(\/?)rate-note>\n.*//g;
            $_ =~ s/<ad-content>(.*)\n.*//g;
            $_ =~ s/\t(.*)\n.*//g;
            $_ =~ s/<\/ad-content>(.*)\n.*//g;
        }
        close( READFILE1 );

    Is there an easier way of doing this? I don't want to use any modules. I know that a module might make this easier, but the file I am reading has a lot of data in it.
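    If staying module-free is a hard requirement, it is usually simpler to pull out the two fields you want than to delete every tag you do not. A minimal sketch that slurps the file, assuming each subclass-code is followed by its city as in the sample document (FILENAME as in your script):

        #!/usr/bin/perl
        use strict;
        use warnings;

        open my $fh, '<', 'FILENAME' or die "FILENAME: $!";
        my $xml = do { local $/; <$fh> };   # slurp the whole document
        close $fh;

        # Pair each subclass-code with the city that follows it.
        while ($xml =~ m{<subclass-code>(.*?)</subclass-code>.*?<city>(.*?)</city>}sg) {
            print "$1|$2\n";                # e.g. TEST SUBCLASS|MIAMI
        }

    The /s flag lets .*? cross line boundaries, so this works whether or not the input is pretty-printed; a real XML parser is still the safer choice if the data can contain surprises.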

    Read the article

  • EKCalendar not added to iCal

    - by Alex75
    I'm seeing strange behavior on my iPhone. I'm creating an application that uses calendar events (EventKit). The class I use is as follows; first the .h file (comments translated from Italian):

        #import "GenericManager.h"
        #import <EventKit/EventKit.h>

        #define oneDay 60*60*24
        #define oneHour 60*60

        @protocol CalendarManagerDelegate;

        @interface CalendarManager : GenericManager

        /*
         * Adds an event to a calendar named name on the day onDate. The event to
         * add is retrieved through the dataSource, which is therefore REQUIRED
         * (!= nil).
         *
         * Returns YES only if the delegate conforms to the
         * CalendarManagerDataSource protocol; NO otherwise.
         */
        + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate;

        /*
         * Adds one event per day between fromDate and toDate to a calendar named
         * name. The event to add is retrieved through the dataSource, which is
         * therefore REQUIRED (!= nil).
         *
         * Returns YES only if the delegate conforms to the
         * CalendarManagerDataSource protocol; NO otherwise.
         */
        + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate;

        @end

        @protocol CalendarManagerDelegate <NSObject>

        // sent when the calendar needs information about the event to add
        - (void) calendarManagerDidCreateEvent:(EKEvent *) event;

        @end

    And the .m file:

        //
        //  CalendarManager.m
        //  AppCampeggioSingolo
        //
        //  Created by CreatiWeb Srl on 12/17/12.
        //  Copyright (c) 2012 CreatiWeb Srl. All rights reserved.
        //

        #import "CalendarManager.h"
        #import "Commons.h"
        #import <objc/message.h>

        @interface CalendarManager ()
        @end

        @implementation CalendarManager

        + (void)requestToEventStore:(EKEventStore *)eventStore delegate:(id)delegate fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate name:(NSString *)name
        {
            if([eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)])
            {
                // iOS >= 6.0
                [eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) {
                    if (granted)
                    {
                        [self addEventForCalendarWithName:name fromDate: fromDate toDate: toDate inEventStore:eventStore withDelegate:delegate];
                    }
                    else
                    {
                    }
                }];
            }
            else if (class_getClassMethod([EKCalendar class], @selector(calendarIdentifier)) != nil)
            {
                // iOS >= 5.0 && iOS < 6.0
                [self addEventForCalendarWithName:name fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate];
            }
            else
            {
                // iOS < 5.0
                EKCalendar *myCalendar = [eventStore defaultCalendarForNewEvents];
                EKEvent *event = [self generateEventForCalendar:myCalendar fromDate: fromDate toDate: toDate inEventStore:eventStore withDelegate:delegate];
                [eventStore saveEvent:event span:EKSpanThisEvent error:nil];
            }
        }

        /*
         * Retrieves the identifier of the calendar associated with the app, or
         * nil if it has never been created.
         */
        + (NSString *) identifierForCalendarName: (NSString *) name
        {
            NSString * confFileName = [self pathForFile:kCurrentCalendarFileName];
            NSDictionary *confCalendar = [NSDictionary dictionaryWithContentsOfFile:confFileName];
            NSString *currentIdentifier = [confCalendar objectForKey:name];
            return currentIdentifier;
        }

        /*
         * Stores the calendar identifier.
         */
        + (void) saveCalendarIdentifier:(NSString *) identifier andName: (NSString *) name
        {
            if (identifier != nil)
            {
                NSString * confFileName = [self pathForFile:kCurrentCalendarFileName];
                NSMutableDictionary *confCalendar = [NSMutableDictionary dictionaryWithContentsOfFile:confFileName];
                if (confCalendar == nil)
                {
                    confCalendar = [NSMutableDictionary dictionaryWithCapacity:1];
                }
                [confCalendar setObject:identifier forKey:name];
                [confCalendar writeToFile:confFileName atomically:YES];
            }
        }

        + (EKCalendar *)getCalendarWithName:(NSString *)name inEventStore:(EKEventStore *)eventStore withLocalSource: (EKSource *)localSource forceCreation:(BOOL) force
        {
            EKCalendar *myCalendar;
            NSString *identifier = [self identifierForCalendarName:name];
            if (force || identifier == nil)
            {
                NSLog(@"create new calendar");
                if (class_getClassMethod([EKCalendar class], @selector(calendarForEntityType:eventStore:)) != nil)
                {
                    // iOS 6.0 and later
                    myCalendar = [EKCalendar calendarForEntityType:EKEntityTypeEvent eventStore:eventStore];
                }
                else
                {
                    myCalendar = [EKCalendar calendarWithEventStore:eventStore];
                }
                myCalendar.title = name;
                myCalendar.source = localSource;
                NSError *error = nil;
                BOOL result = [eventStore saveCalendar:myCalendar commit:YES error:&error];
                if (result)
                {
                    NSLog(@"Saved calendar %@ to event store. %@",myCalendar,eventStore);
                }
                else
                {
                    NSLog(@"Error saving calendar: %@.", error);
                }
                [self saveCalendarIdentifier:myCalendar.calendarIdentifier andName:name];
            }
            // You can also configure properties like the calendar color etc. The
            // important part is to store the identifier for later use. On the
            // other hand, if you already have the identifier, you can just fetch
            // the calendar:
            else
            {
                myCalendar = [eventStore calendarWithIdentifier:identifier];
                NSLog(@"fetch an old-one = %@",myCalendar);
            }
            return myCalendar;
        }

        + (EKCalendar *)addEventForCalendarWithName: (NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate inEventStore:(EKEventStore *)eventStore withDelegate: (id<CalendarManagerDelegate>) delegate
        {
            // iOS 5.0 and later
            EKCalendar *myCalendar;
            EKSource *localSource = nil;
            for (EKSource *source in eventStore.sources)
            {
                if (source.sourceType == EKSourceTypeLocal)
                {
                    localSource = source;
                    break;
                }
            }

            @synchronized(self)
            {
                myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:NO];
                if (myCalendar == nil)
                    myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:YES];
                NSLog(@"End synchronized block %@",myCalendar);
            }

            EKEvent *event = [self generateEventForCalendar:myCalendar fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate];
            [eventStore saveEvent:event span:EKSpanThisEvent error:nil];
            return myCalendar;
        }

        + (EKEvent *) generateEventForCalendar: (EKCalendar *) calendar fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate inEventStore:(EKEventStore *) eventStore withDelegate:(id<CalendarManagerDelegate>) delegate
        {
            EKEvent *event = [EKEvent eventWithEventStore:eventStore];
            event.startDate=fromDate;
            event.endDate=toDate;
            [delegate calendarManagerDidCreateEvent:event];
            [event setCalendar:calendar];

            // search for the event in the calendar; if an identical one is
            // found, don't insert it
            NSPredicate *predicate = [eventStore predicateForEventsWithStartDate:fromDate endDate:toDate calendars:[NSArray arrayWithObject:calendar]];
            NSArray *matchEvents = [eventStore eventsMatchingPredicate:predicate];
            if ([matchEvents count] > 0)
            {
                // some already exist; check whether one of them is the event we
                // want to insert
                BOOL found = NO;
                for (EKEvent *fetchEvent in matchEvents)
                {
                    if ([fetchEvent.title isEqualToString:event.title] && [fetchEvent.notes isEqualToString:event.notes])
                    {
                        found = YES;
                        break;
                    }
                }
                if (found)
                {
                    // it already exists, so don't insert it
                    NSLog(@"OH NOOOOOO!!");
                    event = nil;
                }
            }
            return event;
        }

        #pragma mark - Public Methods

        + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate
        {
            BOOL retVal = YES;
            EKEventStore *eventStore=[[EKEventStore alloc] init];
            if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)])
            {
                [self requestToEventStore:eventStore delegate:delegate fromDate:fromDate toDate: toDate name:name];
            }
            else
            {
                retVal = NO;
            }
            return retVal;
        }

        + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate
        {
            BOOL retVal = YES;
            NSDate *dateCursor = fromDate;
            EKEventStore *eventStore=[[EKEventStore alloc] init];
            if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)])
            {
                while (retVal && ([dateCursor compare:toDate] == NSOrderedAscending))
                {
                    NSDate *finish = [dateCursor dateByAddingTimeInterval:oneDay];
                    [self requestToEventStore:eventStore delegate:delegate fromDate: dateCursor toDate: finish name:name];
                    dateCursor = [dateCursor dateByAddingTimeInterval:oneDay];
                }
            }
            else
            {
                retVal = NO;
            }
            return retVal;
        }

        @end

    In practice, on my iPhone I get this log every time I add an event:

        fetch an old-one = (null)
        19/12/2012 11:33:09.520 AppCampeggioSingolo [730:8b1b] create new calendar
        19/12/2012 11:33:09.558 AppCampeggioSingolo [730:8b1b] Saved calendar EKCalendar

    When I then look in the iCal calendar, I cannot find the event that was added. On the iPhone of a friend of mine, however, everything works correctly. I doubt that the problem stems from the code, but I just do not understand what it could be. I searched all day yesterday and part of today on Google but have not found anything yet. Any help will be greatly appreciated.

    EDIT: I forgot the call, which is:

        [CalendarManager addEventForCalendarWithName: @"myCalendar" fromDate:fromDate toDate: toDate withDelegate:self];

    In the delegate method I simply set the title and notes of the event, like this:

        - (void) calendarManagerDidCreateEvent:(EKEvent *) event
        {
            event.title = @"the title";
            event.notes = @"some notes";
        }
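    One likely explanation for the calendar saving "successfully" but never showing up: when iCloud calendars are enabled on the device, calendars created on the EKSourceTypeLocal source are hidden by the Calendar app, and a stored identifier for such a calendar can also come back nil on the next fetch, which matches the "fetch an old-one = (null)" line. A minimal sketch of a source-selection tweak, assuming the same eventStore and myCalendar as in getCalendarWithName: above (the @"iCloud" title check is an assumption; the account can be named differently):

        // Prefer the iCloud (CalDAV) source when it exists; fall back to Local.
        EKSource *targetSource = nil;
        for (EKSource *source in eventStore.sources) {
            if (source.sourceType == EKSourceTypeCalDAV &&
                [source.title isEqualToString:@"iCloud"]) {
                targetSource = source;
                break;
            }
        }
        if (targetSource == nil) {
            for (EKSource *source in eventStore.sources) {
                if (source.sourceType == EKSourceTypeLocal) {
                    targetSource = source;
                    break;
                }
            }
        }
        myCalendar.source = targetSource;

    That would also explain why the same code works on your friend's iPhone if iCloud is switched off there.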

    Read the article

  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx. One thing I noticed is that the logs show up immediately in my Azure table storage. I wonder if this is expected with custom trace listeners, or whether it is because I am in a development environment.

    My diagnostics.wadcfg:

        <?xml version="1.0" encoding="utf-8"?>
        <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
          <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" />
          <Directories scheduledTransferPeriod="PT1M">
            <IISLogs container="wad-iis-logfiles" />
            <CrashDumps container="wad-crash-dumps" />
          </Directories>
          <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" />
        </DiagnosticMonitorConfiguration>

    I have changed my approach a bit. Now I am defining the listener in the web.config of my web role. I notice that when I set autoflush to true in the web.config, everything works, but scheduledTransferPeriod is not honored because the flush method pushes to table storage. I would like to have scheduledTransferPeriod trigger the flush, or trigger the flush after a certain number of log entries, i.e. when the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the custom TraceListener where I can listen for the scheduledTransferPeriod?

        <system.diagnostics>
          <!-- http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx
               By default autoflush is false.
               By default useGlobalLock is true. While we try to be threadsafe,
               we keep this default for now. Later, if we would like to increase
               performance, we can remove this. See
               http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx -->
          <trace>
            <listeners>
              <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" />
              <remove name="Default" />
            </listeners>
          </trace>
        </system.diagnostics>

    I have modified the custom trace listener to the following:

        namespace Pos.Services.Implementation
        {
            class TableTraceListener : TraceListener
            {
                #region Fields

                //connection string for azure storage
                readonly string _connectionString;

                //Custom sql storage table for logs.
                //TODO put in config
                readonly string _diagnosticsTable;

                [ThreadStatic]
                static StringBuilder _messageBuffer;

                readonly object _initializationSection = new object();
                bool _isInitialized;

                CloudTableClient _tableStorage;

                readonly object _traceLogAccess = new object();
                readonly List<LogEntry> _traceLog = new List<LogEntry>();

                #endregion

                #region Constructors

                public TableTraceListener()
                    : base("TableTraceListener")
                {
                    _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
                    _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");
                }

                #endregion

                #region Methods

                /// <summary>
                /// Flushes the entries to the storage table
                /// </summary>
                public override void Flush()
                {
                    if (!_isInitialized)
                    {
                        lock (_initializationSection)
                        {
                            if (!_isInitialized)
                            {
                                Initialize();
                            }
                        }
                    }

                    var context = _tableStorage.GetTableServiceContext();
                    context.MergeOption = MergeOption.AppendOnly;
                    lock (_traceLogAccess)
                    {
                        _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry));
                        _traceLog.Clear();
                    }
                    if (context.Entities.Count > 0)
                    {
                        context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null);
                    }
                }

                /// <summary>
                /// Creates the storage table object. This class does not need to be locked because the caller is locked.
                /// </summary>
                private void Initialize()
                {
                    var account = CloudStorageAccount.Parse(_connectionString);
                    _tableStorage = account.CreateCloudTableClient();
                    _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists();
                    _isInitialized = true;
                }

                public override bool IsThreadSafe
                {
                    get { return true; }
                }

                #region Trace and Write Methods

                /// <summary>
                /// Writes the message to a string buffer
                /// </summary>
                /// <param name="message">the Message</param>
                public override void Write(string message)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();
                    _messageBuffer.Append(message);
                }

                /// <summary>
                /// Writes the message with a line breaker to a string buffer
                /// </summary>
                /// <param name="message"></param>
                public override void WriteLine(string message)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();
                    _messageBuffer.AppendLine(message);
                }

                /// <summary>
                /// Appends the trace information and message
                /// </summary>
                /// <param name="eventCache">the Event Cache</param>
                /// <param name="source">the Source</param>
                /// <param name="eventType">the Event Type</param>
                /// <param name="id">the Id</param>
                /// <param name="message">the Message</param>
                public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
                {
                    base.TraceEvent(eventCache, source, eventType, id, message);
                    AppendEntry(id, eventType, eventCache);
                }

                /// <summary>
                /// Adds the trace information to a collection of LogEntry objects
                /// </summary>
                /// <param name="id">the Id</param>
                /// <param name="eventType">the Event Type</param>
                /// <param name="eventCache">the EventCache</param>
                private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache)
                {
                    if (_messageBuffer == null)
                        _messageBuffer = new StringBuilder();

                    var message = _messageBuffer.ToString();
                    _messageBuffer.Length = 0;

                    if (message.EndsWith(Environment.NewLine))
                        message = message.Substring(0, message.Length - Environment.NewLine.Length);

                    if (message.Length == 0)
                        return;

                    var entry = new LogEntry()
                    {
                        PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30),
                        RowKey = string.Format("{0:D19}", eventCache.Timestamp),
                        EventTickCount = eventCache.Timestamp,
                        Level = (int)eventType,
                        EventId = id,
                        Pid = eventCache.ProcessId,
                        Tid = eventCache.ThreadId,
                        Message = message
                    };

                    lock (_traceLogAccess)
                        _traceLog.Add(entry);
                }

                #endregion

                #endregion
            }
        }
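    Two points follow from how Windows Azure Diagnostics works: scheduledTransferPeriod only governs the buffers WAD itself owns (which is also why the entries appear immediately; this listener writes straight to table storage and bypasses WAD entirely), and there is no event a custom TraceListener can subscribe to for the transfer period. A common substitute is to give the listener its own timer and flush on that interval and on dispose. A minimal sketch against the Flush() shown above, with autoflush turned off in web.config; the 30-second interval is a stand-in that should really come from configuration:

        // Hypothetical additions to TableTraceListener.
        private readonly System.Threading.Timer _flushTimer;

        public TableTraceListener()
            : base("TableTraceListener")
        {
            _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
            _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");

            // Push buffered entries every 30 seconds instead of on every event.
            _flushTimer = new System.Threading.Timer(_ => Flush(), null,
                TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
        }

        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                _flushTimer.Dispose();
                Flush();   // drain whatever is still buffered on shutdown
            }
            base.Dispose(disposing);
        }

    With that in place, TraceEvent/AppendEntry only queue entries into _traceLog and the timer does the actual table writes.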

    Read the article

  • NetBeans Development 7 - Windows 7 64-bit … JNI native calls ... a how to guide

    - by CirrusFlyer
    I provide this for you to hopefully save you some time and pain. As part of my experience in getting to know NetBeans Development 7 on my Windows 64-bit workstation, I found another frustrating adventure in trying to get the JNI (Java Native Interface) abilities up and working in my project. As such, I am including a brief summary of the steps required (as all the documentation I found was completely incorrect for these versions of Windows and NetBeans on how to do JNI). It took a couple of days of experimentation and reviewing every web page I could find that included these technologies as keyword searches. Yuk!! Not fun.

    To begin, as NetBeans Development is "all about modules," if you are reading this you probably have a need for one or more of your modules to perform JNI calls. Most of what is available on this site or the Internet in general (not to mention the help file in NB7) is either completely wrong for these versions, or so sparse as to be essentially useless to anyone other than a JNI expert. Here is what you are looking for ... the "cut to the chase" how-to guide to get a JNI call up and working on your NB7 / Windows 64-bit box.

    1) From within your NetBeans module (not the host application), declare your native method(s) and make sure you can compile the Java source without errors. Example:

        package org.mycompanyname.nativelogic;

        public class NativeInterfaceTest
        {
            static
            {
                try
                {
                    if (System.getProperty( "os.arch" ).toLowerCase().equals( "amd64" ) )
                        System.loadLibrary( <64-bit_folder_name_on_file_system>/<file_name.dll> );
                    else
                        System.loadLibrary( <32-bit_folder_name_on_file_system>/<file_name.dll> );
                }
                catch (SecurityException se) {}
                catch (UnsatisfiedLinkError ule) {}
                catch (NullPointerException npe) {}
            }

            public NativeInterfaceTest() {}

            native String echoString(String s);
        }

    Take note that we only load the library once (as it's in a static block), because otherwise you will throw exceptions if attempting to load it again. Also take note of our single (in this example) native method titled "echoString". This is the method that our C / C++ application is going to implement; then, via the magic of JNI, we'll call it from our Java code.

    2) If using a 64-bit version of Windows (which we are here), open a 64-bit Visual Studio Command Prompt (versus the standard 32-bit version) and execute the "vcvarsall" BAT file, along with an "amd64" command line argument, to set the environment up for 64-bit tools. Example:

        <path_to_Microsoft_Visual_Studio_10.0>/VC/vcvarsall.bat amd64

    Take note that you can use any version of the C / C++ compiler from Microsoft you wish. I happen to have Visual Studio 2005, 2008, and 2010 installed on my box, so I chose to use "v10.0", but any that support 64-bit development will work fine. The other important aspect here is the "amd64" param.

    3) In the Command Prompt, change drives/directories so that you are at the root of the fully qualified class location on the file system that contains your native method declaration. Example: the fully qualified class name for my natively declared method is "org.mycompanyname.nativelogic.NativeInterfaceTest". As we successfully compiled our Java in step 1 above, we should find it contained in our NetBeans module at something similar to the following:

        /build/classes/org/mycompanyname/nativelogic/NativeInterfaceTest.class

    We need to make sure our Command Prompt sets "/build/classes" as the current directory, because of our next step.
    4) In this step we'll create our C / C++ header file that contains the JNI-required statements. Type the following in the Command Prompt:

        javah -jni org.mycompanyname.nativelogic.NativeInterfaceTest

    and hit Enter. If you receive any kind of error that states this is an unrecognized command, that simply means your Windows computer does not know the PATH to that command (it's in your JDK's /bin folder). Either run the command from there, include the fully qualified path name when invoking this application, or set your computer's PATH environment variable to include that path in its search. This should produce a file called "org_mycompanyname_nativelogic_NativeInterfaceTest.h" ... a C header file. I'd make a copy of this in case you need a backup later.

    5) Edit the NativeInterfaceTest.h header file and include an implementation for the echoString() method. Example:

        JNIEXPORT jstring JNICALL Java_org_mycompanyname_nativelogic_NativeInterfaceTest_echoString
          (JNIEnv *env, jobject jobj, jstring js)
        {
            return((*env)->NewStringUTF(env, "My JNI is up and working after lots of research"));
        }

    Notice how you can't simply return a normal Java String (because you're in C at the moment). You have to tell the passed-in JVM variable to create a Java String for you that will be returned. Check out Oracle's JNI documentation for other data types and how to create them for JNI purposes.

    6) Close and save your changes to the header file. Now that you've added an implementation to the header, change the file extension from ".h" to ".c", as it's now a C source code file that properly implements the JNI-required interface. Example: NativeInterfaceTest.c

    7) We need to compile the newly created source code file and link it too. From within the Command Prompt, type the following:

        cl /I"path_to_my_jdks_include_folder" /I"path_to_my_jdks_include_win32_folder" /D:AMD64=1 /LD NativeInterfaceTest.c /FeNativeInterfaceTest.dll /link /machine:x64

    Example:

        cl /I"D:/Program Files/Java/jdk1.6.0_21/include" /I"D:/Program Files/java/jdk1.6.0_21/include/win32" /D:AMD64=1 /LD NativeInterfaceTest.c /FeNativeInterfaceTest.dll /link /machine:x64

    Notice that the quotes around the paths to the 'include' and 'include/win32' folders are required because I have spaces in my folder names ... 'Program Files'. You can still include them without problems if you have no spaces, but they are mandatory if the paths contain spaces when using a command prompt. This will generate several files, but it's the DLL we're interested in. This is what the System.loadLibrary() Java method is looking for.

    8) Congratulations! You're at the last step. Simply take the DLL and paste it at the following location:

        <path_of_NetBeansProjects_folder>/<project_name>/<module_name>/build/cluster/modules/lib/x64

    Note that you'll probably have to create the "lib" and "x64" folders. Example:

        C:\Users\<user_name>\Documents\NetBeansProjects\<application_name>\<module_name>\build\cluster\modules\lib\x64\NativeInterfaceTest.dll

    Java code ... notice how we don't include the ".dll" file extension in the loadLibrary() call?

        System.loadLibrary( "/x64/NativeInterfaceTest" );

    Now, in your Java code you can create a NativeInterfaceTest object, call the echoString() method, and it will return the String value you typed in the NativeInterfaceTest.c source code file. Hopefully this will save you the brain damage I endured trying to figure all this out on my own. Good luck and happy coding!
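    To verify the whole chain end to end, a tiny caller in the same package can exercise the native method; NativeInterfaceTestDriver is a made-up name here, and the DLL must already be sitting in build/cluster/modules/lib/x64 as described in step 8:

        package org.mycompanyname.nativelogic;

        public class NativeInterfaceTestDriver {
            public static void main(String[] args) {
                // Constructing the class runs its static block, which calls
                // System.loadLibrary() and pulls in the DLL.
                NativeInterfaceTest nit = new NativeInterfaceTest();
                // Should print the string hard-coded in NativeInterfaceTest.c.
                System.out.println(nit.echoString("ping"));
            }
        }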

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
    When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post of building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based in a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this: 1: using System; 2: using System.Reflection; 3: [assembly: AssemblyVersion("1.1.0.0")] 4: [assembly: AssemblyFileVersion("1.1.0.0")] I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid the duplicate attributes since they now leave in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.   
Writing the Custom Workflow Activity Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straight forward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of version to the AssemblyVersionInfo activity which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want to resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right: 1: [TestMethod] 2: public void Assembly_version_should_be_parsed_correctly_from_build_name() 3: { 4: // arrange 5: const string versionFile = "SolutionAssemblyVersionInfo.cs"; 6: WriteTestVersionFile(versionFile); 7: var activity = new VersionAssemblies(); 8: var arguments = new Dictionary<string, object> { 9: { "tfsBuildNumber", "CI_MyApplication.4"}, 10: { "solutionVersionFile", versionFile} 11: }; 12:   13: // act 14: var result = WorkflowInvoker.Invoke(activity, arguments); 15:   16: // assert 17: Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]); 18: var lines = File.ReadAllLines(versionFile); 19: Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]")); 20: Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]")); 21: } 22: 23: private void WriteTestVersionFile(string versionFile) 24: { 25: var fileContents = "using System.Reflection;\n" + 26: "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" + 27: "[assembly: AssemblyFileVersion(\"1.2.0.0\")]"; 28: File.WriteAllText(versionFile, fileContents); 29: }   At this point, the code for our AssemblyVersion activity is pretty straight forward: 1: [BuildActivity(HostEnvironmentOption.Agent)] 2: public class AssemblyVersionInfo : CodeActivity 3: { 4: [RequiredArgument] 5: public InArgument<string> FileName { get; set; } 6:   7: [RequiredArgument] 8: public InArgument<string> TfsBuildNumber { get; set; } 9:   10: public OutArgument<string> NewAssemblyFileVersion { get; set; } 11:   12: protected override void Execute(CodeActivityContext context) 13: { 14: var solutionVersionFile = this.FileName.Get(context); 15: 16: // Ensure that the file is writeable 17: var fileAttributes = File.GetAttributes(solutionVersionFile); 18: File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly); 19:   20: // Prepare assembly versions 21: var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile); 22: var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context)); 23: var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2); 24: var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber); 25: this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion); 26:   27: // Perform the actual replacement 28: var contents = this.GetFileContents(newAssemblyVersion, 
newAssemblyFileVersion); 29: File.WriteAllText(solutionVersionFile, contents); 30:   31: // Restore the file's original attributes 32: File.SetAttributes(solutionVersionFile, fileAttributes); 33: } 34:   35: #region Private Methods 36:   37: private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion) 38: { 39: var cs = new StringBuilder(); 40: cs.AppendLine("using System.Reflection;"); 41: cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion); 42: cs.AppendLine(); 43: cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion); 44: return cs.ToString(); 45: } 46:   47: private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath) 48: { 49: var lines = File.ReadAllLines(filePath); 50: var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault(); 51:   52: if (versionLine == null) 53: { 54: throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute"); 55: } 56:   57: return ExtractMajorMinor(versionLine); 58: } 59:   60: private static Tuple<string, string> ExtractMajorMinor(string versionLine) 61: { 62: var firstQuote = versionLine.IndexOf('"') + 1; 63: var secondQuote = versionLine.IndexOf('"', firstQuote); 64: var version = versionLine.Substring(firstQuote, secondQuote - firstQuote); 65: var versionParts = version.Split('.'); 66: return new Tuple<string, string>(versionParts[0], versionParts[1]); 67: } 68:   69: private string GetNewBuildNumber(string buildName) 70: { 71: return buildName.Substring(buildName.LastIndexOf(".") + 1); 72: } 73:   74: #endregion 75: }   At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTempate.xaml – we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to “BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion since the newAssemblyFileVersion was produced by our activity.   Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file:   When the build completes, we’ll see the name of our project with the unique revision number:   If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task:     We can now look at the history labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow):   Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties:   Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern. 
The new extensibility features in TFS 2010 make these types of build process customizations quite easy once you get past the initial learning curve.

    Read the article

  • CodePlex Daily Summary for Saturday, March 06, 2010

CodePlex Daily Summary for Saturday, March 06, 2010

New Projects
- Agr.CQRS: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing.
- BigDays 2010: Big>Days 2010
- BizTalk - Controlled Admin: Hi .NET folks, I am planning to start project on a Controlled BizTalk Admin tool. This tool will be useful for the organizations which have "Sh...
- Blacklist of Providers: Blacklist of Providers - the application for department of warehouse logistics (warehouse) at firms.
- Career Vector: A job board software.
- Chargify Demo: This is a sample website for Chargify
- Conceptual: Concept description and animation
- Eric Hexter: My publicly available source code and examples
- FluentNHibernate.Search: A Fluent NHibernate.Search mapping interface for NHibernate provider implementation of Lucene.NET.
- FreelancePlanner: FreelancePlanner is a project tracking tool for freelance translators.
- HTMLx - JavaScript on the Server for .NET: HTMLx is a set of libraries based on ASP.NET engine to provide JavaScript programmability on the server side. It allows Web developers to use JavaS...
- IronMSBuild: IronMSBuild is a custom MSBuild Task, which allows you to execute IronRuby scripts. // have to provide some examples
- LINQ To Blippr: LINQ to Blippr is an open source LINQ Provider for the micro-reviewing service Blippr. LINQ to Blippr makes it easier and more efficent for develo...
- Luk@sh's HTML Parser: library that simplifies parsing of the HTML documents, for .NET
- Meta Choons: Unsure as yet but will be a kind of discogs type site but different..
- NetWork2: NetWork2
- Regular Expression Chooser: Simple gui for choosing the regular expressions that have become more than simple.
- See.Sharper: Hopefully useful C# extensions.
- SharePoint 2010 Toggle User Interface: Toggle the SharePoint 2010 user interface between the new SharePoint 2010 user interface and SharePoint 2007 user interface.
- Silverlight DiscussionBoard for SharePoint: This is a sharepoint 3.0 webpart that uses a silverlight treeview to display metadata about sharepoint discussions anduses the html bridge to show...
- Simple Sales Tracking CRM API Wrapper: The Simple Sales Tracking API Wrapper, enables easy extention development and integration with the hosted service at http://www.simplesalestracking...
- Syntax4Word: A syntax addin for word 2007.
- TortoiseHg installer builder: TortoiseHg and Mercurial installer builder for Windows
- unbinder: Model un binding for route value dictionaries
- Windows Workflow Foundation on Codeplex: This site has previews of Workflow features which are released out of band for the purposes of adoption and feedback.
- XNA RSM Render State Manager: Render state management idea for XNA games. Enables isolation between draw calls whilst reducing DX9 SetRenderState calls to the minimum.

New Releases
- Agr.CQRS: Sourcecode package: Agr.CQRS is a C# framework for DDD applications that use the Command Query Responsibility Segregation pattern (CQRS) and Event Sourcing. This dow...
- Book Cataloger: Preview 0.1.6a: New Features: Export to Word 2007 Bibliography format; Dictionary list editors for Binding, Condition. Improvements: Stability improved; Content ...
- Braintree Client Library: Braintree-1.1.2: Includes minor enhancements to CreditCard and ValidationErrors to support upcoming example application.
- CassiniDev - Cassini 3.5 Developers Edition: CassiniDev v3.5.0.5: For usage see Readme.htm in download. New in CassiniDev v3.5.0.5: Reintroduced the Lib project and signed all; Implemented the CassiniSqlFixture -...
- Composure: Calcium-64420-VS2010rc1.NET4.SL3: This is a simple conversion of Calcium (rev 64420) built in VS2010 RC1 against .NET4 and Silverlight 3. No source files were changed and ALL test...
- Composure: MS AJAX Library (46266) for VS2010 RC1 .NET4: This is a quick port of Microsoft's AJAX Library (rev 46266) for Visual Studio 2010 RC1 built against .NET 4.0. Since this conversion was thrown t...
- Composure: MS Web Test Lightweight for VS2010 RC1 .NET4: A simple conversion of Microsoft's Web Test Lightweight for Visual Studio 2010 RC1 .NET 4.0. This is part of a larger "special request" conversion...
- CoNatural Components: CoNatural Components 1.5: Supporting new data types: Added support for binary data types -> binary, varbinary, etc maps to byte[]. Now supporting SQL Server 2008 new types ...
- Extensia: Extensia 2010-03-05: Extensia is a very large list of extension methods and a few helper types. Some extension methods are not practical (e.g. slow) whilst others are....
- Fluent Assertions: Fluent Assertions release 1.1: In this release, we've worked hard to add some important missing features that we really needed, and also improve resiliance against illegal argume...
- Fluent Ribbon Control Suite: Fluent Ribbon Control Suite 1.0 RC: Fluent Ribbon Control Suite 1.0 (Release Candidate). Includes: Fluent.dll (with .pdb and .xml, debug and release version); Showcase Application Sa...
- FluentNHibernate.Search: 0.1 Beta: First beta version
- FolderSize: FolderSize.Win32.1.0.7.0: FolderSize.Win32.1.0.6.0 A simple utility intended to be used to scan harddrives for the folders that take most place and display this to the user...
- Free Silverlight & WPF Chart Control - Visifire: Silverlight and WPF Step Line Chart: Hi, With this release Visifire introduces Step Line Chart. This release also contains fix for the following issues: * In WPF, if AnimatedUpd...
- Html to OpenXml: HtmlToOpenXml 1.0: The dll library to include in your project. The dll is signed for GAC support. Compiled with .Net 3.5, Dependencies on System.Drawing.dll and Docu...
- Line Counter: 1.5.1: The Line Counter is a tool to calculate lines of your code files. The tool was written in .NET 2.0. Line Counter 1.5.1 Added outline icons and lin...
- Lokad Cloud - .NET O/C mapper (object to cloud) for Windows Azure: Lokad.Cloud v1.0.662.1: You can get the most recent release directly from the build server at http://build.lokad.com/distrib/Lokad.Cloud/
- Lost in Translation: LostInTranslation v0.2: Alpha release: function complete but not UX complete.
- MDownloader: MDownloader-0.15.7.56349: Supported large file resumption. Fixed minor bugs.
- Mini C# Lab: Mini CSharp Lab Ver 1.4: The primary new feature of Ver 1.4 is batch mode! Now you can run Mini C# Lab program as a scheduled task, no UI interactivity is needed. Here ar...
- Mobile Store: First drop: First drop
- patterns & practices SharePoint Guidance: SPG2010 Drop6: SharePoint Guidance Drop Notes, Microsoft patterns and practices ...
- Picasa Downloader: PicasaDownloader (41446): Changelog: Replaced some exception messages by a Summary dialog shown after downloading if there have been problems. Corrected the Portable vers...
- Pod Thrower: Version 1: This is the first release, I'm sure there are bugs, the tool is fully functional and I'm using it currently.
- PowerShell Provider BizTalk: BizTalkFactory PowerShell Provider - 1.1-snapshot: This release constitutes the latest development snapshot for the Provider. Please, leave feedback and use the Issue Tracker to help improve this pr...
- Resharper Settings Manager: RSM 1.2.1: This is a bug fix release. Changes: Fixed plug-in crash when shared settings file was modified externally.
- Reusable Library Demo: Reusable Library Demo v1.0.2: A demonstration of reusable abstractions for enterprise application developers
- SharePoint 2010 Toggle User Interface: SharePoint Toggle User Interface: Release 1.0.0.0
- Starter Kit Mytrip.Mvc.Entity: Mytrip.Mvc.Entity (net3.5 MySQL) 1.0 Beta: MySQL VS 2008 EF Membership UserManager FileManager Localization Captcha ClientValidation Theme CrossBrowser
- TortoiseHg: TortoiseHg 1.0: http://bitbucket.org/tortoisehg/stable/wiki/ReleaseNotes Please backup your user Mercurial.ini file and then uninstall any 0.9.X release before in...
- Visual Studio 2010 and Team Foundation Server 2010 VM Factory: Rangers Virtualization Guidance: Focused guidance on creating a Rangers base image manually and introduction of PowerShell scripts to automate many ...
- Visual Studio DSite: Advanced Email Program (Visual Basic 2008): This email program can send email to any one using your email username and email credentials. The email program can also attach attachments to you...
- WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.6: Several improvements and bug fixes have gone into the comment parsing code for the registers. The plug-in should now correctly pay attention to th...
- WSDLGenerator: WSDLGenerator 0.0.0.3: Fixed SharePoint generated *.wsdl.aspx file; Added commandline option -wsdl which does only generate the wsdl file.

Most Popular Projects
- MetaSharp
- Rawr
- WBFS Manager
- AJAX Control Toolkit
- Microsoft SQL Server Product Samples: Database
- Silverlight Toolkit
- Windows Presentation Foundation (WPF)
- ASP.NET
- LiveUpload to Facebook
- Microsoft SQL Server Community & Samples

Most Active Projects
- Umbraco CMS
- Rawr
- SDS: Scientific DataSet library and tools
- BlogEngine.NET
- jQuery Library for SharePoint Web Services
- patterns & practices – Enterprise Library
- Ionics Isapi Rewrite Filter
- Fluent Assertions
- Composure
- DiffPlex - a .NET Diff Generator

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that tests can be set up quickly and run on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here:
- West Wind WebSurge Product Page
- Walk through Video
- Download link (zip)
- Install from Chocolatey
- Source on GitHub
For a more detailed discussion of the whys and hows and some background, continue reading.

How did I get here?
When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because the bandwidth required to test over the open Web can be enormous.
When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First, it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$) – just for load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, like so many of Microsoft’s tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware.

West Wind WebSurge – Making Load Testing Quick and Easy
So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements.
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works.

Session Capturing
As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, or your own applications using the built in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters allow you to provide filter strings that, if contained in a URL, will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, CSS and script files. Often you’re not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

SSL Captures require Fiddler
Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

Session Storage
A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the first HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand or under program control, writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the first line contains the full URL instead of the domain relative path and Host: header. The rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here also matches what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here.

Use it for single URL Testing
Incidentally I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy tool for REST presentations, and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks.

Overriding Cookies and Domains
Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie, which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

Running Load Tests
Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count currently is for all requests.
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea what’s happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1000 requests a second there’s not much to see anyway, as requests roll by way too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running.

Test Results
When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed, or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete.

Command Line Interface
WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or hitting a specific requests per second count. It’s also nice to use this as a quick and dirty URL test facility, similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
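To make the session format discussed in the Session Storage section concrete, here's a hand-written sketch of what a minimal two-request session file might look like. This is illustrative only – the URLs are made up, and the exact separator line WebSurge uses is an assumption, so check a real capture for the precise format:

GET http://localhost/store/products HTTP/1.1
Accept: text/html
User-Agent: WestWindWebSurge

----------------------------------------------

POST http://localhost/store/cart HTTP/1.1
Content-Type: application/x-www-form-urlencoded

productId=42&quantity=1

Note how the first line of each entry carries the full URL rather than a relative path plus a Host: header, as described above.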
Current Status
Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do that made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License
If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected].

Resources
For more info on WebSurge and to download it to try it out, use the following links:
- West Wind WebSurge Home
- Download West Wind WebSurge
- Getting Started with West Wind WebSurge Video

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • MVC Portable Areas Enhancement – Embedded Resource Controller

    - by Steve Michelotti
    MvcContrib contains a feature called Portable Areas which I’ve recently blogged about. In short, portable areas provide a way to distribute MVC binary components as simple .NET assemblies where the aspx/ascx files are actually compiled into the assembly as embedded resources. This is an extremely cool feature but once you start building robust portable areas, you’ll also want to be able to access other external files like css and javascript.  After my recent post suggesting portable areas be expanded to include other embedded resources, Eric Hexter asked me if I’d like to contribute the code to MvcContrib (which of course I did!). Embedded resources are stored in a case-sensitive way in .NET assemblies and the existing embedded view engine inside MvcContrib already took this into account. Obviously, we’d want the same case sensitivity handling to be taken into account for any embedded resource so my job consisted of 1) adding the Embedded Resource Controller, and 2) a little refactor to extract the logic that deals with embedded resources so that the embedded view engine and the embedded resource controller could both leverage it and, therefore, keep the code DRY. The embedded resource controller targets these scenarios: External image files that are referenced in an <img> tag External files referenced like css or JavaScript files Image files referenced inside css files Embedded Resources Walkthrough This post will describe a walkthrough of using the embedded resource controller in your portable areas to include the scenarios outlined above. I will build a trivial “Quick Links” widget to illustrate the concepts. The portable area registration is the starting point for all portable areas. The MvcContrib.PortableAreas.EmbeddedResourceController is optional functionality – you must opt-in if you want to use it.  To do this, you simply “register” it by providing a route in your area registration that uses it like this: 1: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}", 2: new { controller = "EmbeddedResource", action = "Index" }, 3: new string[] { "MvcContrib.PortableAreas" }); First, notice that I can specify any route I want (e.g., “quicklinks/resources/…”).  Second, notice that I need to include the “MvcContrib.PortableAreas” namespace as the fourth parameter so that the framework is able to find the EmbeddedResourceController at runtime. The handling of embedded views and embedded resources have now been merged.  Therefore, the call to: 1: RegisterTheViewsInTheEmmeddedViewEngine(GetType()); has now been removed (breaking change).  It has been replaced with: 1: RegisterAreaEmbeddedResources(); Other than that, the portable area registration remains unchanged. The solution structure for the static files in my portable area looks like this: I’ve got a css file in a folder called “Content” as well as a couple of image files in a folder called “images”. To reference these in my aspx/ascx code, all of have to do is this: 1: <link href="<%= Url.Resource("Content.QuickLinks.css") %>" rel="stylesheet" type="text/css" /> 2: <img src="<%= Url.Resource("images.globe.png") %>" /> This results in the following HTML mark up: 1: <link href="/quicklinks/resource/Content.QuickLinks.css" rel="stylesheet" type="text/css" /> 2: <img src="/quicklinks/resource/images.globe.png" /> The Url.Resource() method is now included in MvcContrib as well. Make sure you import the “MvcContrib” namespace in your views. 
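Before moving on, it may help to see roughly what a controller like this does under the hood. The sketch below is illustrative only – it is not MvcContrib's actual implementation, and the content-type mapping is a made-up minimal version (MvcContrib also handles the case-sensitivity of embedded resource names, which this sketch glosses over):

// Illustrative sketch only - uses System.Web and System.Web.Mvc.
public class SampleEmbeddedResourceController : Controller
{
    public ActionResult Index(string resourceName)
    {
        // Embedded resources are named "<AssemblyName>.<Folder>.<FileName>",
        // e.g. "MyPortableArea.Content.QuickLinks.css"
        var assembly = typeof(SampleEmbeddedResourceController).Assembly;
        var fullName = assembly.GetName().Name + "." + resourceName;

        var stream = assembly.GetManifestResourceStream(fullName);
        if (stream == null)
            throw new HttpException(404, "Embedded resource not found: " + resourceName);

        // Stream the resource back with an appropriate MIME type
        return File(stream, GetContentType(resourceName));
    }

    // Hypothetical mapping - a real implementation would cover more types.
    private static string GetContentType(string resourceName)
    {
        if (resourceName.EndsWith(".css")) return "text/css";
        if (resourceName.EndsWith(".js")) return "text/javascript";
        if (resourceName.EndsWith(".png")) return "image/png";
        return "application/octet-stream";
    }
}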
Next, I have the following HTML to render the quick links:
1: <ul class="links">
2: <li><a href="http://www.google.com">Google</a></li>
3: <li><a href="http://www.bing.com">Bing</a></li>
4: <li><a href="http://www.yahoo.com">Yahoo</a></li>
5: </ul>
Notice the <ul> tag has a class called “links”. This is defined inside my QuickLinks.css file and looks like this:
1: ul.links li
2: {
3: background: url(/quicklinks/resource/images.navigation.png) left 4px no-repeat;
4: padding-left: 20px;
5: margin-bottom: 4px;
6: }
On line 3 we’re able to refer to the URL for the background property. As a final note, although we already have complete control over the location of the embedded resources inside the assembly, what if we also want control over the physical URL routes as well? This point was raised by John Nelson in this post. This has been taken into account as well. For example, suppose you want your physical URL to look like this:
1: <img src="/quicklinks/images/globe.png" />
instead of the corresponding URL shown above (i.e., “/quicklinks/resource/images.globe.png”). You can do this easily by specifying another route for it which includes a “resourcePath” parameter that is pre-pended. Here is the complete code for the area registration with the custom route for the images shown on lines 9-11:
1: public class QuickLinksRegistration : PortableAreaRegistration
2: {
3: public override void RegisterArea(System.Web.Mvc.AreaRegistrationContext context, IApplicationBus bus)
4: {
5: context.MapRoute("ResourceRoute", "quicklinks/resource/{resourceName}",
6: new { controller = "EmbeddedResource", action = "Index" },
7: new string[] { "MvcContrib.PortableAreas" });
8:
9: context.MapRoute("ResourceImageRoute", "quicklinks/images/{resourceName}",
10: new { controller = "EmbeddedResource", action = "Index", resourcePath = "images" },
11: new string[] { "MvcContrib.PortableAreas" });
12:
13: context.MapRoute("quicklink", "quicklinks/{controller}/{action}",
14: new {controller = "links", action = "index"});
15:
16: this.RegisterAreaEmbeddedResources();
17: }
18:
19: public override string AreaName
20: {
21: get
22: {
23: return "QuickLinks";
24: }
25: }
26: }
The Quick Links portable area results in the following requests (including custom route formats): The complete code for this post is now included in the Portable Areas sample solution in the latest MvcContrib source code. You can get the latest code now. Portable Areas open up exciting new possibilities for MVC development!

    Read the article

  • VS 2010 SP1 and SQL CE

    - by ScottGu
    Last month we released the Beta of VS 2010 Service Pack 1 (SP1).  You can learn more about the VS 2010 SP1 Beta from Jason Zander’s two blog posts about it, and from Scott Hanselman’s blog post that covers some of the new capabilities enabled with it.   You can download and install the VS 2010 SP1 Beta here. Last week I blogged about the new Visual Studio support for IIS Express that we are adding with VS 2010 SP1. In today’s post I’m going to talk about the new VS 2010 SP1 tooling support for SQL CE, and walkthrough some of the cool scenarios it enables.  SQL CE – What is it and why should you care? SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Easy Migration to SQL Server SQL CE is an embedded database – which makes it ideal for development, testing, and light-usage scenarios.  For high-volume sites and applications you’ll probably want to migrate your database to use SQL Server Express (which is free), SQL Server or SQL Azure.  These servers enable much better scalability, more development features (including features like Stored Procedures – which aren’t supported with SQL CE), as well as more advanced data management capabilities. We’ll ship migration tools that enable you to optionally take SQL CE databases and easily upgrade them to use SQL Server Express, SQL Server, or SQL Azure.  You will not need to change your code when upgrading a SQL CE database to SQL Server or SQL Azure.  Our goal is to enable you to be able to simply change the database connection string in your web.config file and have your application just work. 
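To make that concrete, a SQL CE 4 connection string in web.config typically looks something like the following – the connection name and .sdf file name here are placeholders, and |DataDirectory| resolves to the application's \App_Data folder:

<connectionStrings>
  <add name="Store"
       connectionString="Data Source=|DataDirectory|\Store.sdf"
       providerName="System.Data.SqlServerCe.4.0" />
</connectionStrings>

Swapping this entry's connectionString and providerName to point at a SQL Server or SQL Azure instance is all it takes to move off the embedded engine later.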
New Tooling Support for SQL CE in VS 2010 SP1 VS 2010 SP1 includes much improved tooling support for SQL CE, and adds support for using SQL CE within ASP.NET projects for the first time.  With VS 2010 SP1 you can now: Create new SQL CE Databases Edit and Modify SQL CE Database Schema and Indexes Populate SQL CE Databases within Data Use the Entity Framework (EF) designer to create model layers against SQL CE databases Use EF Code First to define model layers in code, then create a SQL CE database from them, and optionally edit the DB with VS Deploy SQL CE databases to remote servers using Web Deploy and optionally convert them to full SQL Server databases You can take advantage of all of the above features from within both ASP.NET Web Forms and ASP.NET MVC based projects. Download You can enable SQL CE tooling support within VS 2010 by first installing VS 2010 SP1 (beta). Once SP1 is installed, you’ll also then need to install the SQL CE Tools for Visual Studio download.  This is a separate download that enables the SQL CE tooling support for VS 2010 SP1. Walkthrough of Two Scenarios In this blog post I’m going to walkthrough how you can take advantage of SQL CE and VS 2010 SP1 using both an ASP.NET Web Forms and an ASP.NET MVC based application. Specifically, we’ll walkthrough: How to create a SQL CE database using VS 2010 SP1, then use the EF4 visual designers in Visual Studio to construct a model layer from it, and then display and edit the data using an ASP.NET GridView control. How to use an EF Code First approach to define a model layer using POCO classes and then have EF Code-First “auto-create” a SQL CE database for us based on our model classes.  We’ll then look at how we can use the new VS 2010 SP1 support for SQL CE to inspect the database that was created, populate it with data, and later make schema changes to it.  We’ll do all this within the context of an ASP.NET MVC based application. You can follow the two walkthroughs below on your own machine by installing VS 2010 SP1 (beta) and then installing the SQL CE Tools for Visual Studio download (which is a separate download that enables SQL CE tooling support for VS 2010 SP1). Walkthrough 1: Create a SQL CE Database, Create EF Model Classes, Edit the Data with a GridView This first walkthrough will demonstrate how to create and define a SQL CE database within an ASP.NET Web Form application.  We’ll then build an EF model layer for it and use that model layer to enable data editing scenarios with an <asp:GridView> control. Step 1: Create a new ASP.NET Web Forms Project We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET Web Forms project.  We’ll use the “ASP.NET Web Application” project template option so that it has a default UI skin implemented: Step 2: Create a SQL CE Database Right click on the “App_Data” folder within the created project and choose the “Add->New Item” menu command: This will bring up the “Add Item” dialog box.  Select the “SQL Server Compact 4.0 Local Database” item (new in VS 2010 SP1) and name the database file to create “Store.sdf”: Note that SQL CE database files have a .sdf filename extension. Place them within the /App_Data folder of your ASP.NET application to enable easy deployment. When we clicked the “Add” button above a Store.sdf file was added to our project: Step 3: Adding a “Products” Table Double-clicking the “Store.sdf” database file will open it up within the Server Explorer tab.  
Since it is a new database there are no tables within it. Right click on the “Tables” icon and choose the “Create Table” menu command to create a new database table. We’ll name the new table “Products” and add 4 columns to it. We’ll mark the first column as a primary key (and make it an identity column so that its value will automatically increment with each new row): When we click “OK” our new Products table will be created in the SQL CE database.

Step 4: Populate with Data
Once our Products table is created it will show up within the Server Explorer. We can right-click it and choose the “Show Table Data” menu command to edit its data: Let’s add a few sample rows of data to it:

Step 5: Create an EF Model Layer
We have a SQL CE database with some data in it – let’s now create an EF Model Layer that will provide a way for us to easily query and update data within it. Let’s right-click on our project and choose the “Add->New Item” menu command. This will bring up the “Add New Item” dialog – select the “ADO.NET Entity Data Model” item within it and name it “Store.edmx”. This will add a new Store.edmx item to our solution explorer and launch a wizard that allows us to quickly create an EF model: Select the “Generate From Database” option above and click next. Choose to use the Store.sdf SQL CE database we just created and then click next again. The wizard will then ask you what database objects you want to import into your model. Let’s choose to import the “Products” table we created earlier: When we click the “Finish” button Visual Studio will open up the EF designer. It will have a Product entity already on it that maps to the “Products” table within our SQL CE database: The VS 2010 SP1 EF designer works exactly the same with SQL CE as it does already with SQL Server and SQL Express. The Product entity above will be persisted as a class (called “Product”) that we can programmatically work against within our ASP.NET application.

Step 6: Compile the Project
Before using your model layer you’ll need to build your project. Do a Ctrl+Shift+B to compile the project, or use the Build->Build Solution menu command.

Step 7: Create a Page that Uses our EF Model Layer
Let’s now create a simple ASP.NET Web Form that contains a GridView control that we can use to display and edit our Products data (via the EF Model Layer we just created). Right-click on the project and choose the Add->New Item command. Select the “Web Form from Master Page” item template, and name the page you create “Products.aspx”. Base the master page on the “Site.Master” template that is in the root of the project. Add an <h2>Products</h2> heading to the new page, and add an <asp:gridview> control within it: Then click the “Design” tab to switch into design-view. Select the GridView control, and then click the top-right corner to display the GridView’s “Smart Tasks” UI: Choose the “New data source…” drop down option above. This will bring up the below dialog, which allows you to pick your Data Source type: Select the “Entity” data source option – which will allow us to easily connect our GridView to the EF model layer we created earlier. This will bring up another dialog that allows us to pick our model layer: Select the “StoreEntities” option in the dropdown – which is the EF model layer we created earlier.
Then click next – which will allow us to pick which entity within it we want to bind to: Select the “Products” entity in the above dialog – which indicates that we want to bind against the “Product” entity class we defined earlier. Then click the “Enable automatic updates” checkbox to ensure that we can both query and update Products. When you click “Finish” VS will wire-up an <asp:EntityDataSource> to your <asp:GridView> control: The last two steps we’ll do will be to click the “Enable Editing” checkbox on the Grid (which will cause the Grid to display an “Edit” link on each row) and (optionally) use the Auto Format dialog to pick a UI template for the Grid.

Step 8: Run the Application
Let’s now run our application and browse to the /Products.aspx page that contains our GridView. When we do so we’ll see a Grid UI of the Products within our SQL CE database. Clicking the “Edit” link for any of the rows will allow us to edit their values: When we click “Update” the GridView will post back the values, persist them through our EF Model Layer, and ultimately save them within our SQL CE database.

Learn More about using EF with ASP.NET Web Forms
Read this tutorial series on the http://asp.net site to learn more about how to use EF with ASP.NET Web Forms. The tutorial series uses SQL Express as the database – but the nice thing is that all of the same steps/concepts can now also be done with SQL CE.

Walkthrough 2: Using EF Code-First with SQL CE and ASP.NET MVC 3
We used a database-first approach with the sample above – where we first created the database, and then used the EF designer to create model classes from the database. In addition to supporting a designer-based development workflow, EF also enables a more code-centric option which we call “code first development”. Code-First Development enables a pretty sweet development workflow. It enables you to:
- Define your model objects by simply writing “plain old classes” with no base classes or visual designer required
- Use a “convention over configuration” approach that enables database persistence without explicitly configuring anything
- Optionally override the convention-based persistence and use a fluent code API to fully customize the persistence mapping
- Optionally auto-create a database based on the model classes you define – allowing you to start from code first
I’ve done several blog posts about EF Code First in the past – I really think it is great. The good news is that it also works very well with SQL CE. The combination of SQL CE, EF Code First, and the new VS tooling support for SQL CE enables a pretty nice workflow. Below is a simple example of how you can use them to build a simple ASP.NET MVC 3 application.

Step 1: Create a new ASP.NET MVC 3 Project
We’ll begin by using the File->New Project menu command within Visual Studio to create a new ASP.NET MVC 3 project. We’ll use the “Internet Project” template so that it has a default UI skin implemented:

Step 2: Use NuGet to Install EFCodeFirst
Next we’ll use the NuGet package manager (automatically installed by ASP.NET MVC 3) to add the EFCodeFirst library to our project. We’ll use the Package Manager command shell to do this. Bring up the package manager console within Visual Studio by selecting the View->Other Windows->Package Manager Console menu command.
Then type: install-package EFCodeFirst within the package manager console to download the EFCodeFirst library and have it be added to our project: When we enter the above command, the EFCodeFirst library will be downloaded and added to our application:

Step 3: Build Some Model Classes
Using a “code first” based development workflow, we will create our model classes first (even before we have a database). We create these model classes by writing code. For this sample, we will right click on the “Models” folder of our project and add the below three classes to our project: The “Dinner” and “RSVP” model classes above are “plain old CLR objects” (aka POCO). They do not need to derive from any base classes or implement any interfaces, and the properties they expose are standard .NET data-types. No data persistence attributes or data code has been added to them. The “NerdDinners” class derives from the DbContext class (which is supplied by EFCodeFirst) and handles the retrieval/persistence of our Dinner and RSVP instances from a database.

Step 4: Listing Dinners
We’ve written all of the code necessary to implement our model layer for this simple project. Let’s now expose and implement the URL /Dinners/Upcoming within our project. We’ll use it to list upcoming dinners that happen in the future. We’ll do this by right-clicking on our “Controllers” folder and selecting the “Add->Controller” menu command. We’ll name the Controller we want to create “DinnersController”. We’ll then implement an “Upcoming” action method within it that lists upcoming dinners using our model layer above. We will use a LINQ query to retrieve the data and pass it to a View to render with the code below: We’ll then right-click within our Upcoming method and choose the “Add-View” menu command to create an “Upcoming” view template that displays our dinners. We’ll use the “empty” template option within the “Add View” dialog and write the below view template using Razor:

Step 5: Configure our Project to use a SQL CE Database
We have finished writing all of our code – our last step will be to configure a database connection-string to use. We will point our NerdDinners context class to a SQL CE database by adding the below <connectionString> to the web.config file at the top of our project: EF Code First uses a default convention where context classes will look for a connection-string that matches the DbContext class name. Because we created a “NerdDinners” class earlier, we’ve also named our connectionstring “NerdDinners”. Above we are configuring our connection-string to use SQL CE as the database, and telling it that our SQL CE database file will live within the \App_Data directory of our ASP.NET project.

Step 6: Running our Application
Now that we’ve built our application, let’s run it! We’ll browse to the /Dinners/Upcoming URL – doing so will display an empty list of upcoming dinners: You might ask – but where did it query to get the dinners from? We didn’t explicitly create a database?!? One of the cool features that EF Code-First supports is the ability to automatically create a database (based on the schema of our model classes) when the database we point it at doesn’t exist. Above we configured EF Code-First to point at a SQL CE database in the \App_Data\ directory of our project. When we ran our application, EF Code-First saw that the SQL CE database didn’t exist and automatically created it for us.
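The model classes and the Upcoming action above are shown as images in the original post. As a rough, hedged reconstruction (property names beyond Dinner, RSVP, NerdDinners and the later UrlLink are assumptions), they look something like this:

// Plain POCO classes - no base class, attributes, or designer required.
public class Dinner
{
    public int DinnerID { get; set; }
    public string Title { get; set; }           // assumed property
    public DateTime EventDate { get; set; }     // assumed property
    public virtual ICollection<RSVP> RSVPs { get; set; }
}

public class RSVP
{
    public int RsvpID { get; set; }
    public string AttendeeEmail { get; set; }   // assumed property
    public virtual Dinner Dinner { get; set; }
}

// Derives from the EFCodeFirst package's DbContext and handles persistence for both types.
public class NerdDinners : DbContext
{
    public DbSet<Dinner> Dinners { get; set; }
    public DbSet<RSVP> RSVPs { get; set; }
}

// Upcoming action: a LINQ query for dinners later than today.
public class DinnersController : Controller
{
    public ActionResult Upcoming()
    {
        using (var db = new NerdDinners())
        {
            var dinners = db.Dinners
                            .Where(d => d.EventDate > DateTime.Now)
                            .ToList();   // materialize before the context is disposed
            return View(dinners);
        }
    }
}

The NerdDinners connection string follows the same shape as the one sketched earlier in this post, pointing at |DataDirectory|NerdDinners.sdf with the System.Data.SqlServerCe.4.0 provider.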
Step 7: Using VS 2010 SP1 to Explore our newly created SQL CE Database
Click the “Show all Files” icon within the Solution Explorer and you’ll see the “NerdDinners.sdf” SQL CE database file that was automatically created for us by EF Code-First within the \App_Data\ folder: We can optionally right-click on the file and “Include in Project” to add it to our solution: We can also double-click the file (regardless of whether it is added to the project) and VS 2010 SP1 will open it as a database we can edit within the “Server Explorer” tab of the IDE. Below is the view we get when we double-click our NerdDinners.sdf SQL CE file. We can drill in to see the schema of the Dinners and RSVPs tables in the tree explorer. Notice how two tables – Dinners and RSVPs – were automatically created for us within our SQL CE database. This was done by EF Code First when we accessed the NerdDinners class by running our application above: We can right-click on a Table and use the “Show Table Data” command to enter some upcoming dinners in our database: We’ll use the built-in editor that VS 2010 SP1 supports to populate our table data below: And now when we hit “refresh” on the /Dinners/Upcoming URL within our browser we’ll see some upcoming dinners show up:

Step 8: Changing our Model and Database Schema
Let’s now modify the schema of our model layer and database, and walk through one way that the new VS 2010 SP1 tooling support for SQL CE can make this easier. With EF Code-First you typically start making database changes by modifying the model classes. For example, let’s add an additional string property called “UrlLink” to our “Dinner” class. We’ll use this to point to a link for more information about the event: Now when we re-run our project and visit the /Dinners/Upcoming URL we’ll see an error thrown: We are seeing this error because EF Code-First automatically created our database, and by default when it does this it adds a table that helps track whether the schema of our database is in sync with our model classes. EF Code-First helpfully throws an error when they become out of sync – making it easier to track down issues at development time that you might otherwise only find (via obscure errors) at runtime. Note that if you do not want this feature you can turn it off by changing the default conventions of your DbContext class (in this case our NerdDinners class) to not track the schema version. Our model classes and database schema are out of sync in the above example – so how do we fix this? There are two approaches you can use today:
- Delete the database and have EF Code First automatically re-create the database based on the new model class schema (losing the data within the existing DB)
- Modify the schema of the existing database to bring it in sync with the model classes (keeping/migrating the data within the existing DB)
There are a couple of ways you can do the second approach above. Below I’m going to show how you can take advantage of the new VS 2010 SP1 tooling support for SQL CE to use a database schema tool to modify our database structure. We are also going to be supporting a “migrations” feature with EF in the future that will allow you to automate/script database schema migrations programmatically.

Step 9: Modify our SQL CE Database Schema using VS 2010 SP1
The new SQL CE tooling support within VS 2010 SP1 makes it easy to modify the schema of our existing SQL CE database.
To do this we’ll right-click on our “Dinners” table and choose the “Edit Table Schema” command: This will bring up the below “Edit Table” dialog. We can rename, change or delete any of the existing columns in our table, or click at the bottom of the column listing and type to add a new column. Below I’ve added a new “UrlLink” column of type “nvarchar” (since our property is a string): When we click OK our database will be updated to have the new column and our schema will now match our model classes. Because we are manually modifying our database schema, there is one additional step we need to take to let EF Code-First know that the database schema is in sync with our model classes. As I mentioned earlier, when a database is automatically created by EF Code-First it adds an “EdmMetadata” table to the database to track schema versions (and hash our model classes against them to detect mismatches between our model classes and the database schema): Since we are manually updating and maintaining our database schema, we don’t need this table – and can just delete it: This will leave us with just the two tables that correspond to our model classes: And now when we re-run our /Dinners/Upcoming URL it will display the dinners correctly: One last touch we could do would be to update our view to check for the new UrlLink property and render an <a> link to it if an event has one: And now when we refresh our /Dinners/Upcoming we will see hyperlinks for the events that have a UrlLink stored in the database:

Summary
SQL CE provides a free, embedded database engine that you can use to easily enable database storage. With SQL CE 4 you can now take advantage of it within ASP.NET projects and applications (both Web Forms and MVC). VS 2010 SP1 provides tooling support that enables you to easily create, edit and modify SQL CE databases – as well as use the standard EF designer against them. This allows you to re-use your existing skills and data knowledge while taking advantage of an embedded database option. This is useful both for small applications (where you don’t need the scalability of a full SQL Server), as well as for development and testing scenarios – where you want to be able to rapidly develop/test your application without having a full database instance. SQL CE makes it easy to later migrate your data to a full SQL Server or SQL Azure instance if you want to – without having to change any code in your application. All we would need to change in the above two scenarios is the <connectionString> value within the web.config file in order to have our code run against a full SQL Server. This provides the flexibility to scale up your application starting from a small embedded database solution as needed. Hope this helps, Scott. P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
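As a footnote, the view tweak described in Step 9 above (also shown as an image in the post) might look roughly like this in Razor – the link text here is an assumption:

@* Inside the view's loop over the dinners in Model: render a link
   only when a dinner actually has a UrlLink value *@
@if (!string.IsNullOrEmpty(dinner.UrlLink))
{
    <a href="@dinner.UrlLink">More info</a>
}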

    Read the article

< Previous Page | 549 550 551 552 553 554 555 556 557 558 559 560  | Next Page >