Search Results

Search found 3097 results on 124 pages for 'ben alter'.


  • Hylafax / Capi4hylafax: faxgetty does not recognize number of lines

    - by Wrikken
    We've got a T.30 card, 30 working lines on it, but for some reason, if I add more then 30 faxes in the queue at any time (and we're busy enough at peak times that this happens a lot), faxgetty sends faxes to non-existent lines and they appear in the error queue as a 'busy' signal on the line, which results in a lot of failed faxes because the counter of max 3 tries increases rapidly. This is using faxgetty (USE_FAXGETTY="y" in /etc/default/hylafax). I've inherited this thing, so I'm not entirely sure how faxgetty is supposed to know the number of lines. However, if I alter the script to faxmodem (USE_FAXGETTY="n" in /etc/default/hylafax and manually enabling 30 modems), this behavior goes away (new faxes 'wait' for a line to be available before trying to send, so each try / fail is a valid one on a working line, majorly descreasing the amount of failed faxes. However, when researching this almost anyone talks about faxgetty being the preferred, more robust, method, and on top of that for some unexplained reason all FIFO's disappeared for some reason after several errorless hours with faxmodem, forcing a hylafax restart using faxgetty until we figured out why this faxmodem solution failed (which is another question, and somewhat out of scope here). Environment: Debian 2.6.26-2-amd64 capi4hylafax 1:01.03.00.99.svn.300-12 hylafax-client 2:4.4.4-10.1 hylafax-server 2:4.4.4-10.1 Config --hfaxd.conf-- LogFacility: daemon ServerTracing: 0x1ff --hyla.conf-- Host: localhost Verbose: No VRes: 196 TimeZone: local DialRules: "/etc/hylafax/dialrules.europe" --/etc/hylafax/config -- InternationalPrefix: 00 LongDistancePrefix: 0 AreaCode: 99999 CountryCode: 31 DialStringRules: "etc/dialrules.europe" ModemGroup: any:faxCAPI SendFaxCmd: "/usr/bin/wrapc2faxsend" --/etc/hylafax/config.faxCAPI -- SpoolDir: /var/spool/hylafax FaxRcvdCmd: /var/spool/hylafax/bin/faxrcvd PollRcvdCmd: /var/spool/hylafax/bin/pollrcvd FaxReceiveUser: uucp FaxReceiveGroup: dialout LogFile: /var/spool/hylafax/log/capi4hylafax #no, checking this log did not yield anything interesting LogTraceLevel: 4 LogFileMode: 0600 ModemGroup: any:faxCAPI #repeats of faxCAPI2 = faxCAPI30, with of course another devicename/local ident: { HylafaxDeviceName: faxCAPI RecvFileMode: 0600 FAXNumber: ****redacted**** LocalIdentifier: ****some-ident-per-device*** MaxConcurrentRecvs: 0 OutgoingController: 1 OutgoingMSN: SuppressMSN: 0 NumberPrefix: NumberPlusReplacer: "00" UseISDNFaxService: 0 RingingDuration: 0 { Controller: 1 AcceptSpeech: 0 UseDDI: 0 DDIOffset: DDILength: 0 IncomingDDIs: IncomingMSNs: AcceptGlobalCall: 1 } } So in short: How does faxgetty determine the number of lines available? (the man page isn't terribly revealing, and I can't find an appropriate setting in hylafax-config. And how can I get a capi4hylafax/hylafax setup which queues more faxes then lines are available correctly without immediately incrementing the fail count? We will not be receiving any faxes on this machine b.t.w. As I said, I've inherited this thing, so if there are important configuration options I'm not including, please let me know.
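
    Not part of the original post: a minimal sketch of the faxmodem-based workaround described above, assuming the device names from the question (faxCAPI, then faxCAPI2 through faxCAPI30) and Debian's /etc/default/hylafax:

        # /etc/default/hylafax -- switch off the getty-based setup
        USE_FAXGETTY="n"

        # declare one virtual modem per CAPI channel so faxq queues jobs
        # against 30 known lines instead of dialling non-existent ones
        faxmodem faxCAPI
        for n in $(seq 2 30); do
            faxmodem "faxCAPI${n}"
        done

    (faxmodem only registers the devices with the scheduler; whether this also explains the disappearing FIFOs is a separate question, as the poster notes.)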

    Read the article

  • Having trouble connecting Magento to an external Windows database server using Windows Azure

    - by Kevin H
    "I tried to make this easy to read through" I am using Ubuntu 12.04 LTS for Magento and installed these commands onto the system: sudo apt-get install apache2 sudo apt-get install php5 libapache2-mod-php5 sudo apt-get install php5-mysql sudo apt-get install php5-curl php5-mcrypt php5-gd php5-common sudo apt-get install php5-gd I used Windows Server 2008 R2 August 2012 for Mysql Server For a reference, I used http://www.windowsazure.com/en-us/manage/windows/common-tasks/install-mysql/ When the server was setup, I added an empty disk to it Then, I added endpoints 3306 Next I accessed the server remotely After that, I formatted the empty disk and was inserted as F: Next I downloaded Mysql from http://*.mysql.com version Windows (x86, 64-bit), MSI Installer 5.5.28 In the installation process, I used these settings: Typical Setup - Clicked Next, install, next Chose Detailed Configuration - Clicked next Chose Dedicated MySQL Server Machine - Clicked Next Chose Transactional Database Only - Clicked Next Chose the "F:" Drive - Clicked Next Chose Online Transactional Processing (OLTP) - Clicked Next For Networking Options, I checkmarked 'Enable TCP/IP Networking" 'Add firewall exception for this port' 'Enable Strict Mode' - Clicked Next Chose Standard Character Set - Clicked Next For Windows Options, I checkedmarked 'Install as Window Service" 'Launch the MySQL Server automatically' 'Include Bin Directory in Windows PATH - Clicked Next For Security Options, I checkmarked 'Modify Security Settings' and set root password - Clicked Next Finally clicked Execute and Finish These are the Firewall Setting that I set I clicked inbound rules Properties Scope Allow IP Address and used the internal Address for Magento Server Clicked Apply and exited Next, I opened up MySQL 5.x Command Line Client Entered Root Password Then entered these commands mysql create database magento; mysql Create user magentouser identified by 'password'; mysql Grant select, insert, create, alter, update, delete, lock tables on magento.* to magentouser mysql exit Finally, I opened up the Magento Downloader Magento validation has approved all PHP version is right. Your version is 5.3.10-1ubuntu3.4. PHP Extension curl is loaded PHP Extension dom is loaded PHP Extension gd is loaded PHP Extension hash is loaded PHP Extension iconv is loaded PHP Extension mcrypt is loaded PHP Extension pcre is loaded PHP Extension pdo is loaded PHP Extension pdo_mysql is loaded PHP Extension simplexml is loaded These are all installed on Magento Server For the Database Connection, I used: The Database server only has MySQL 5.5 Server installed on it Host - Internal IP address User Name - The User I created when setting up database Password - The Password I created when setting up database For the password, I did some research and found out that Magento only accepts alphanumeric, so I went and set it up again and used only alphanumeric for the User password Now, I am still getting Accessed denied for database Connection. Also, I have tryed to setup mysql on independant Linux Server but kept getting errors. When, I found the solution. Wouldn't work, so I decided to try Windows. These is the questions, I have been asking and researching to debug this issue Is it because I am using Linux for magento and Windows for Database. 
I have had no luck in finding a reason why this wouldn't work. There must be something I am missing. I also researched the difference between MySQL on Linux and MySQL on Windows, but I have not reached a conclusion on whether installing MySQL on Windows would make a difference in syntax or coding. I have spent a lot of time looking into this and need some help with direction on how to complete my project. Any type of help would be appreciated.
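
    One thing worth ruling out (an assumption, not something confirmed in the post): MySQL accounts are host-specific, so if the account ended up scoped to the wrong host, or the password got mangled, connections from the Magento box will be refused with "access denied". Recreating the user and grant explicitly is a quick check:

        -- run in the MySQL command-line client on the Windows box
        CREATE USER 'magentouser'@'%' IDENTIFIED BY 'password';
        GRANT SELECT, INSERT, CREATE, ALTER, UPDATE, DELETE, LOCK TABLES
            ON magento.* TO 'magentouser'@'%';
        FLUSH PRIVILEGES;

    Replacing '%' with the Magento server's internal IP keeps the grant narrower; either way the host part must match the address MySQL sees the connection coming from.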

    Read the article

  • Apache works on http and https, SVN only on http

    - by user27880
    I asked a question about this before, and got most of it fixed. If I switch off https redirect and go to http://mydomain.com/svn/test0, I get the authentication window popping up, and I can enter my AD credentials, and bingo. Switching https redirect back on, if I go to http://mydomain.com I am automatically redirected to https, which is what I want, and the 'CerntOS test page' pops up. Perfect. The problem occurs when I want to go to one of my test repos via https. Here is my httpd.conf file, with confidential information suitably hosed... === NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName svn.mycompany.com ErrorLog logs/subversion-error_log CustomLog logs/subversion-access_log common Redirect permanent / https://svn.mycompany.com </VirtualHost> <VirtualHost svn.mycompany.com:443> SSLEngine On SSLCertificateFile /etc/httpd/ssl/wildcard.mycompany.com.crt SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mycompany.com.key SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt ServerName svn.mycompany.com ServerAdmin [email protected] ErrorLog logs/subversion-error_log CustomLog logs/subversion-access_log common <Location /svn> DAV svn SVNParentPath /usr/local/subversion SVNListParentPath off AuthName "Subversion Repositories" # NT Logon Details Require valid-user AuthBasicProvider file ldap AuthType Basic AuthzLDAPAuthoritative off AuthUserFile /etc/httpd/conf/svnpasswd AuthName "Subversion Server II" AuthLDAPURL "ldap://our-pdc:389/OU=Company Name,DC=com,DC=co,DC=uk?sAMAccountName?sub?(objectClass=*)" AuthLDAPBindDN "DOMAIN\subversion" AuthLDAPBindPassword XXXXXXX AuthzSVNAccessFile /etc/httpd/conf/svnaccessfile </Location> </VirtualHost> === Now, in ssl_error_log, I get === ==> /etc/httpd/logs/ssl_error_log <== [Fri Nov 01 16:07:55 2013] [error] [client XXX.XXX.XXX.XXX] File does not exist: /var/www/html/svn === This comes from the DocumentRoot directive further up the httpd.conf file, which of course points to /var/www/html. I know that this location is wrong, but how can I get SVN to serve the repo? I tried an Alias directive as so .. Alias /svn /usr/local/subversion .. but this didn't work. I tried to alter the Location directive. That didn't work either. Can someone help? I sense that this is so close to being solved ... Thanks. Edit: apachectl -S output: [root@svn conf]# apachectl -S VirtualHost configuration: 127.0.0.1:443 svn.mycompany.com (/etc/httpd/conf/httpd.conf:1020) wildcard NameVirtualHosts and default servers: default:443 svn.mycompany.com (/etc/httpd/conf.d/ssl.conf:74) *:80 is a NameVirtualHost default server svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012) port 80 namevhost svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012) Syntax OK
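
    Not from the original post, but the apachectl -S output above hints at one likely culprit: there are two port-443 vhosts (the custom one bound to 127.0.0.1:443 and the default one from ssl.conf), so HTTPS requests may be landing in the ssl.conf vhost, which has no <Location /svn> and falls back to /var/www/html. A hedged sketch of the usual fix:

        # bind the SVN vhost to all addresses on 443 and make 443 name-based,
        # then remove or disable the default <VirtualHost _default_:443> in ssl.conf
        NameVirtualHost *:443
        <VirtualHost *:443>
            ServerName svn.mycompany.com
            SSLEngine On
            SSLCertificateFile      /etc/httpd/ssl/wildcard.mycompany.com.crt
            SSLCertificateKeyFile   /etc/httpd/ssl/wildcard.mycompany.com.key
            SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt

            <Location /svn>
                DAV svn
                SVNParentPath /usr/local/subversion
                # ... same auth directives as in the config above ...
            </Location>
        </VirtualHost>

    After the change, apachectl -S should show a single, name-based *:443 entry pointing at this vhost.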

    Read the article

  • How to set up ssh's umask for all types of connections

    - by Unode
    I've been searching for a way to setup OpenSSH's umask to 0027 in a consistent way across all connection types. By connection types I'm referring to: sftp scp ssh hostname ssh hostname program The difference between 3. and 4. is that the former starts a shell which usually reads the /etc/profile information while the latter doesn't. In addition by reading this post I've became aware of the -u option that is present in newer versions of OpenSSH. However this doesn't work. I must also add that /etc/profile now includes umask 0027. Going point by point: sftp - Setting -u 0027 in sshd_config as mentioned here, is not enough. If I don't set this parameter, sftp uses by default umask 0022. This means that if I have the two files: -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute -rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write When I use sftp to put them in the destination machine I actually get: -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write However when I set -u 0027 on sshd_config of the destination machine I actually get: -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write which is not expected, since it should actually be: -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute -rw-r----- 1 user user 0 2011-01-29 02:04 read-write Anyone understands why this happens? scp - Independently of what is setup for sftp, permissions are always umask 0022. I currently have no idea how to alter this. ssh hostname - no problem here since the shell reads /etc/profile by default which means umask 0027 in the current setup. ssh hostname program - same situation as scp. In sum, setting umask on sftp alters the result but not as it should, ssh hostname works as expected reading /etc/profile and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome. EDIT: I would like to avoid patches that require manually compiling openssh. The system is running Ubuntu Server 10.04.01 (lucid) LTS with openssh packages from maverick. Answer: As indicated by poige, using pam_umask did the trick. The exact changes were: Lines added to /etc/pam.d/sshd: # Setting UMASK for all ssh based connections (ssh, sftp, scp) session optional pam_umask.so umask=0027 Also, in order to affect all login shells regardless of if they source /etc/profile or not, the same lines were also added to /etc/pam.d/login. EDIT: After some of the comments I retested this issue. At least in Ubuntu (where I tested) it seems that if the user has a different umask set in their shell's init files (.bashrc, .zshrc,...), the PAM umask is ignored and the user defined umask used instead. Changes in /etc/profile did't affect the outcome unless the user explicitly sources those changes in the init files. It is unclear at this point if this behavior happens in all distros.
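
    A quick way to confirm the pam_umask change took effect for the non-shell paths as well (hostnames below are placeholders):

        ssh somehost 'umask'                  # non-interactive command: expect 0027
        scp testfile somehost:/tmp/           # then: ls -l /tmp/testfile on the remote side
        echo 'put testfile' | sftp somehost   # expect -rw-r----- for the uploaded copy

    As the edit above notes, a umask set in the remote user's own shell init files can still override the PAM value for shell-based sessions.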

    Read the article

  • configuring mod_proxy_html properly?

    - by tobinjim
    I have an apache2 web server that handles reverse proxy for Rails3 app running on another machine. The setup works except URLs generated within the webapp aren't getting rewritten by my configuration for mod_proxy_html. The ["Reverse Proxy Scenario"][1] is exactly what I'm trying to do, so I've followed the tutorial as completely as I know how. I've applied or tried answers supplied here on stackoverflow, to no effect. According to the "Reverse Proxy Scenario" you want a number of modules loaded. All those instructions are in my httpd.conf file and when I examine the output from apactectl -t -D DUMP_MODULES all the expected modules show in amongst the listing. My external web server doing the reverse proxy is at www.ourdomain.org and the Rails app is internally available at apphost.local (the server is Mac OS X Server 10.6, the rails app server is Mac OS X 10.6). What's working right now is access to the webapp via the reverse proxy as: http://www.ourdomain.org/apphost/railsappname/controllername/action But none of the javascript files, css files or other assets get loaded, and links internal to the web app come out missing the apphost portion of the URL, as if my rewrite rule is configured incorrectly (so of course I've focused on that and can't seem to get anything to be added or deleted in the process of passing the html in from the apphost and out through the Apache server). For instance, hovering over an action link in the html returned by the web app you'll get: http://www.ourdomain.org/railsappname/controllername/action Here's what my Apache directives look like: LoadModule proxy_html_module /usr/libexec/apache2/mod_proxy_html.so LoadModule xml2enc_module /usr/libexec/apache2/mod_xml2enc.so ProxyHTMLLogVerbose On LogLevel Debug ProxyPass /apphost/ http://apphost.local/ <Location /apphost/> SetOutputFilter INFLATE;proxy-html;DEFLATE ProxyPassReverse / ProxyHTMLExtended On ProxyHTMLURLMap railsappname/ apphost/railsappname/ RequestHeader unset Accept-Encoding </Location> After every change I make to httpd.conf I religiously check apachectl -t just to be sane. I'm definitely not an Apache expert, but all the directives that follow mine seem to not overrule what I'm doing here. But then nothing that I try seems to alter the URLs I see in my browser after hitting the Apache server with a request for my web app. Even if you can't tell what I've done incorrectly, I'd welcome ideas on how to get Apache to help see what it's working on and doing to the html coming from my web app. That's what I understood the ProxyHTMLLogVerbose On and LogLevel Debug to be setting up, but I'm not seeing anything in the log files.
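
    For comparison only (an assumption about the intended mapping, not a confirmed fix): ProxyPassReverse usually needs the backend URL, and ProxyHTMLURLMap typically has to map both the absolute backend address and the root-relative links a Rails app emits:

        ProxyPass /apphost/ http://apphost.local/
        <Location /apphost/>
            ProxyPassReverse http://apphost.local/
            SetOutputFilter  INFLATE;proxy-html;DEFLATE
            ProxyHTMLExtended On
            # absolute backend links -> proxy prefix
            ProxyHTMLURLMap http://apphost.local /apphost
            # root-relative links ("/railsappname/...") -> prefixed form
            ProxyHTMLURLMap / /apphost/
            RequestHeader unset Accept-Encoding
        </Location>

    With verbose logging already enabled as shown, each URL the filter touches should then appear in the error log, which helps confirm whether the maps are being applied at all.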

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a database in SQL Server 2008 on a 1Tb hard drive and it filled the drive, there is only 4Kb free. The MDF file is 323Gb and the LDF is 653Gb. The hard disk this DB is on has no other files on it other than the MDF and LDF so it's impossible to free up any space on the drive. The main hard disk is smaller but there is enough room to transfer the MDF to that drive, in case that helps. This server is overseas at a customer site and it's not possible at the moment to add more disk space to the server. It's also not possible to delete any records because the DB is in a failed mode (due to no disk space) and it doesn't respond to most commands. The Db is currently in full recovery mode which is why the LDF file is so large. This DB really doesn't need to be in full recovery so going forward we plan on switching it to simple mode which will save us a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this problem but everything I've found first involves either freeing up disk space or adding more disk space, neither of which is an option at this time. I'm stuck and any help would be greatly appreciated. I get the following log when trying to switch the DB to online mode. Msg 945, Level 14, State 2, Line 3 Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. Msg 5069, Level 16, State 1, Line 3 ALTER DATABASE statement failed. Msg 1101, Level 17, State 12, Line 3 Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup. I've found the following solutions but none work due to having no disk space on that drive, and since the DB is in a failed state I can't run most commmands. - DBCC SHRINKFILE - can't be run because doing a 'use DBNAME' fails - Detaching the DB and then changing the location of the MDF/LDF files, this fails because the DB is in an offline mode so you can't run detach. I'm at a loss about what else to try. Thanks.
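
    One hedged way out, given that the system drive has room: take the database offline, repoint a file at the other drive, move it at the OS level, and only then switch to simple recovery and shrink. Logical and physical names below are placeholders:

        ALTER DATABASE DBNAME SET OFFLINE WITH ROLLBACK IMMEDIATE;

        -- repoint the log (or data) file at a drive that has free space
        ALTER DATABASE DBNAME
            MODIFY FILE (NAME = DBNAME_log, FILENAME = 'C:\SQLData\DBNAME_log.ldf');

        -- physically move/copy the .ldf to that path, then:
        ALTER DATABASE DBNAME SET ONLINE;

        ALTER DATABASE DBNAME SET RECOVERY SIMPLE;
        DBCC SHRINKFILE (DBNAME_log, 1024);   -- target size in MB

    Whether SET OFFLINE succeeds while the database is in a failed state is worth testing first; if it does, none of the remaining steps need free space on the full drive.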

    Read the article

  • Email client won't connect to SMTP authentication server

    - by Jason
    Im having trouble installing SMTH Auth for my ubuntu email server. I have followed ubuntu own guide for SMTH AUT (https://help.ubuntu.com/14.04/serverguide/postfix.html). But my email client thunderbird is giving this error " lost connection to SMTP-client 127.0.0.1." I cant add new users to thundbird either because of this connection problem. Do i have to alter any setting on my Thunderbird perhaps since ? I did try to make thunderbird use SSL for imap as well but that neither works. I restarted postfix and dovecot to find errors but both run just fine. Prior to SMTP auth changes thunderbird could connect just fine to my server and send mails. This is my main.cf file in postfix. It looks just like the one on ubuntu guide above. readme_directory = no # TLS parameters #smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = mail.mysite.com mydomain = mysite.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = $mydomain mydestination = mysite.com #relayhost = smtp.192.168.10.1.com mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.10.0/24 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all home_mailbox = Maildir/ mailbox_command = #SMTP AUTH smtpd_sasl_type = dovecot smtpd_recipient_restrictions=permit_mynetworks, permit_sasl_authenticated,reject_unauth_destination smtpd_sasl_local_domain = smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous broken_sasl_auth_clients = yes smtpd_tls_auth_only = no smtp_tls_security_level = may smtpd_tls_security_level = may smtp_tls_note_starttls_offer = yes smtpd_tls_key_file = /etc/ssl/private/smtpd.key smtpd_tls_cert_file = /etc/ssl/certs/smtpd.crt smtpd_tls_CAfile = /etc/ssl/certs/cacert.pem smtpd_tls_loglevel = 1 smtpd_tls_received_header = yes This my dovecot configuration at 10-master.conf service imap-login { inet_listener imap { #port = 143 } inet_listener imaps { #port = 993 #ssl = yes } # Number of connections to handle before starting a new process. Typically # the only useful values are 0 (unlimited) or 1. 1 is more secure, but 0 # is faster. <doc/wiki/LoginProcess.txt> #service_count = 1 # Number of processes to always keep waiting for more connections. #process_min_avail = 0 # If you set service_count=0, you probably need to grow this. #vsz_limit = $default_vsz_limit } service pop3-login { inet_listener pop3 { #port = 110 } inet_listener pop3s { #port = 995 #ssl = yes } } service lmtp { unix_listener lmtp { #mode = 0666 } # Create inet listener only if you can't use the above UNIX socket #inet_listener lmtp { # Avoid making LMTP visible for the entire internet #address = #port = #} } service imap { # Most of the memory goes to mmap()ing files. You may need to increase this # limit if you have huge mailboxes. #vsz_limit = $default_vsz_limit # Max. number of IMAP processes (connections) #process_limit = 1024 } service pop3 { # Max. number of POP3 processes (connections) #process_limit = 1024 } service auth { unix_listener auth-userdb { #mode = 0600 #user = #group = } # Postfix smtp-auth unix_listener /var/spool/postfix/private/auth { mode = 0660 user = postfix } } service dict { # If dict proxy is used, mail processes should have access to its socket. # For example: mode=0660, group=vmail and global mail_access_groups=vmail unix_listener dict { #mode = 0600 #user = #group = } } I did add auth_mechanisms = plain login to 10-auth.conf as well.
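
    Something easy to overlook with this setup (an assumption, since master.cf isn't shown above): mail clients normally submit on port 587, and the submission service has to be enabled in /etc/postfix/master.cf with SASL switched on, roughly like this:

        # /etc/postfix/master.cf
        submission inet n       -       -       -       -       smtpd
          -o syslog_name=postfix/submission
          -o smtpd_tls_security_level=may
          -o smtpd_sasl_auth_enable=yes
          -o smtpd_client_restrictions=permit_sasl_authenticated,reject

    After a postfix reload, pointing Thunderbird's outgoing server at port 587 with STARTTLS and normal-password authentication is one combination that matches the main.cf above.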

    Read the article

  • ssh, "Last Login", `last` and OS X

    - by allentown
    I have hit the googles as much as I can on this, being specific to OS X, I am not finding an answer. Nothing is wrong, but curiosity levels are high. $ssh [email protected] Password: Last login: Wed Apr 7 21:28:03 2010 from my-laptop.local ^lonely tylenol^ Line 1 is my command line 2 is the shell asking for the password line 3 is where my question comes from line 4 comes out of /etc/motd I can find nothing in ~/ of an of the .bash* files that contains the string "Last Login", and would like to alter it. It performs some type of hostname lookup, which I can not determine. If I ssh to another host: $ssh [email protected] Last login: Wed Apr 7 21:14:51 2010 from 123-234-321-123-some.cal.isp.net.example hi there, you are on box 456 line 1 is my command line 2 is again, where my question comes from line 3 is from /etc/motd *The dash'd IP address is not reversed On this remote host, I have ~/.ssh and it's corresponding keys set up, so there was no password request Where is the "Last Login:" coming from, where does the date stamp come from, and most importantly, where does the hostname come from? While on [email protected] (box 456) $echo hostname remote.location.example456.com Or with dig, to make sure I have rDNS/PTR set up, for which I am not authoritative, but my ISP has correctly set... $dig -x 123.234.321.123 PTR remote.location.example456.com or $dig PTR 123.321.234.123.in-addr.arpa. +short remote.location.example456.com. my previous hostname used to be 123-234-321-123-some.cal.isp.net.example, which I set with hostname -s remote.location.example456.com, because it was obnoxious to see such a long name. That solves the value of $echo hostname which now returns remote.location.example456.com. Mac OS X, 10.6 is this case, does seem to honor: touch ~/.hushlogin If leave that file empty, I get nothing on the shell when I login. I want to know what controls the host resolution of the IP, and how it is all working. For example, running last reports a huge list of my logins, which have obtusely long hostnames, when they would be preferable to just be remote.location.example456.com. More confusing to me, reading the man page for wtmp and lastlog, it looks like lastlog is not used on OS X, /var/log/lastlog does not exist. Actually, none of these exist on 10.5 or 10.6: /var/run/utmp The utmp file. /var/log/wtmp The wtmp file. /var/log/lastlog The lastlog file. If I am to assume that the system is doing some kind of reverse lookup, I certainly do not know what it is, as it is not an accurate one.
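
    For what it's worth, a couple of quick checks on the OS X side (no claims about the exact sshd internals):

        # silence the "Last login" banner and the motd for this account
        touch ~/.hushlogin

        # the records `last` shows come through the utmpx API, which on 10.5+
        # is backed by the Apple System Log rather than wtmp/lastlog files
        last $USER | head

    The hostname in the banner is whatever sshd recorded for the client at login time (a reverse lookup of the connecting IP when one resolves), which is why it tracks the client's PTR record rather than anything set on the server.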

    Read the article

  • Nginx Rewrite Rule For File Within Folder Not Working

    - by user3620111
    Good evening everyone or possible early morning if you are in my neck of the woods. My problem seems trivial but after several hours of testing, researching and fiddling I can't seem to get this simple nginx rewrite function to work. There are several rewrites we need, some will have multiple parameters but I cant even get this simple 1 parameter current url to alter at all to the desired. Current: website.com/public/viewpost.php?id=post-title Desired: website.com/public/post/post-title Can someone kindly point me to as what I have done wrong, I am baffled / very tired... For testing purposes before we launch we were just using a simple port on the server. Here is that section. # Listen on port 7774 for dev test server { listen 7774; server_name localhost; root /usr/share/nginx/html/paa; index index.php home.php index.html index.htm /public/index.php; location ~* /uploads/.*\.php$ { if ($request_uri ~* (^\/|\.jpg|\.png|\.gif)$ ) { break; } return 444; } location ~ \.php$ { try_files $uri @rewrite =404; fastcgi_index index.php; include fastcgi_params; fastcgi_pass php5-fpm-sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location @rewrite { rewrite ^/viewpost.php$ /post/$arg_id? permanent; } } I have tried countless attempts such as above @rewrite and simpler: location / { rewrite ^/post/(.*)$ /viewpost.php?id=$1 last; } location ~ \.php$ { try_files $uri =404; fastcgi_index index.php; include fastcgi_params; fastcgi_pass php5-fpm-sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } I can not seem to get anything to work at all, I have tried changing the location tried multiple rules... Please tell me what I have done wrong. Pause for facepalm [relocated from stack overflow as per mod suggestion]
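
    A minimal sketch of one way to express the single-parameter case (assuming viewpost.php and the pretty URL both live under /public/), placed alongside the existing location blocks inside the same server:

        # /public/post/post-title  ->  /public/viewpost.php?id=post-title
        location ~ ^/public/post/(.+)$ {
            rewrite ^/public/post/(.+)$ /public/viewpost.php?id=$1 last;
        }

    The rewritten URI is then picked up by the existing \.php$ location; redirecting the other direction (old viewpost.php URLs to the new form) would be a separate permanent rewrite.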

    Read the article

  • extract data from Plist to array and dictionary

    - by Boaz
    Hi I made a plist that looks like that: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList 1.0.dtd"> <plist version="1.0"> <array> <array> <dict> <key>Company</key> <string>xxx</string> <key>Title</key> <string>VP Marketing</string> <key>Name</key> <string>Alon ddfr</string> </dict> <dict> <key>Name</key> <string>Adam Ben Shushan</string> <key>Title</key> <string>CEO</string> <key>Company</key> <string>Shushan ltd.</string> </dict> </array> <array> <dict> <key>Company</key> <string>xxx</string> <key>Title</key> <string>CTO</string> <key>Name</key> <string>Boaz frf</string> </dict> </array> </array> </plist> Now I want to extract the data like that (all the 'A' for key "Name" to one section and all the 'B' "Name" to other one): NSString *plistpath = [[NSBundle mainBundle] pathForResource:@"PeopleData" ofType:@"plist"]; NSMutableArray *attendees = [[NSMutableArray alloc] initWithContentsOfFile:plistpath]; listOfPeople = [[NSMutableArray alloc] init];//Add items NSDictionary *indexADict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:0] objectForKey:@"Name"] forKey:@"Profiles"]; NSDictionary *indexBDict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:1] objectForKey:@"Name"] forKey:@"Profiles"]; [listOfPeople addObject:indexADict]; [listOfPeople addObject:indexBDict]; This in order to view them in sectioned tableView. I know that the problem is here: NSDictionary *indexADict = [NSDictionary dictionaryWithObject:[[attendees objectAtIndex:0] objectForKey:@"Name"] forKey:@"Profiles"]; But I just can't figure how to do it right. Thanks.
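
    For reference, a sketch of grouping that works with the structure shown (each element of attendees is itself an array of dictionaries, so objectForKey: has to be sent to the inner dictionaries, not to the section array):

        NSMutableArray *listOfPeople = [[NSMutableArray alloc] init];
        for (NSArray *section in attendees) {
            NSMutableArray *names = [NSMutableArray array];
            for (NSDictionary *person in section) {
                [names addObject:[person objectForKey:@"Name"]];
            }
            [listOfPeople addObject:
                [NSDictionary dictionaryWithObject:names forKey:@"Profiles"]];
        }

    Each table section then maps to one entry of listOfPeople, with its rows coming from the array stored under @"Profiles".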

    Read the article

  • SignalR recording when a Web Page has closed

    - by Benjamin Rogers
    I am using MassTransit request and response with SignalR. The web site makes a request to a windows service that creates a file. When the file has been created the windows service will send a response message back to the web site. The web site will open the file and make it available for the users to see. I want to handle the scenario where the user closes the web page before the file is created. In that case I want the created file to be emailed to them. Regardless of whether the user has closed the web page or not, the message handler for the response message will be run. What I want to be able to do is have some way of knowing within the response message handler that the web page has been closed. This is what I have done already. It doesnt work but it does illustrate my thinking. On the web page I have $(window).unload(function () { if (event.clientY < 0) { // $.connection.hub.stop(); $.connection.exportcreate.setIsDisconnected(); } }); exportcreate is my Hub name. In setIsDisconnected would I set a property on Caller? Lets say I successfully set a property to indicate that the web page has been closed. How do I find out that value in the response message handler. This is what it does now protected void BasicResponseHandler(BasicResponse message) { string groupName = CorrelationIdGroupName(message.CorrelationId); GetClients()[groupName].display(message.ExportGuid); } private static dynamic GetClients() { return AspNetHost.DependencyResolver.Resolve<IConnectionManager>().GetClients<ExportCreateHub>(); } I am using the message correlation id as a group. Now for me the ExportGuid on the message is very important. That is used to identify the file. So if I am going to email the created file I have to do it within the response handler because I need the ExportGuid value. If I did store a value on Caller in my hub for the web page close, how would I access it in the response handler. Just in case you need to know. display is defined on the web page as exportCreate.display = function (guid) { setTimeout(function () { top.location.href = 'GetExport.ashx?guid=' + guid; }, 500); }; GetExport.ashx opens the file and returns it as a response. Thank you, Regards Ben
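
    A sketch of one way to make the closed-page state visible to the response handler (all names are hypothetical; the idea is shared state keyed by something both sides know, such as the correlation id used for the group name, rather than a property on Caller, which the handler cannot reach):

        // requires: using System.Collections.Concurrent;
        public static class ClosedPages
        {
            private static readonly ConcurrentDictionary<string, bool> closed =
                new ConcurrentDictionary<string, bool>();

            public static void Mark(string key)      { closed[key] = true; }
            public static bool WasClosed(string key)
            {
                bool value;
                return closed.TryGetValue(key, out value) && value;
            }
        }

        // Hub method called from $(window).unload, passing the correlation id:
        public void SetIsDisconnected(string correlationId)
        {
            ClosedPages.Mark(correlationId);
        }

        // In BasicResponseHandler:
        string groupName = CorrelationIdGroupName(message.CorrelationId);
        if (ClosedPages.WasClosed(groupName))
            EmailFile(message.ExportGuid);              // hypothetical email path
        else
            GetClients()[groupName].display(message.ExportGuid);

    Bear in mind that unload handlers are not guaranteed to fire, so treating the email as a fallback for files that are never collected may still be needed.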

    Read the article

  • Infinite loop when using fscanf

    - by user1409641
    I wrote this simple program in C, because I'm studying FILES right now at University. I take a txt file with a list of the results of the last race so my program will show the data formatted as I want. Here's my code: /* Esercizio file Motogp */ #include <stdio.h> #define SIZE 20 int main () { int pos, punt, num; float kmh; char nome[SIZE+1], cognome[SIZE+1], moto[SIZE+1]; char naz[SIZE+1], nome_file[SIZE+1]; FILE *fp; printf ("Inserisci il nome del file da aprire: "); gets (nome_file); fp = fopen (nome_file, "r"); if (fopen == NULL) printf ("Errore nell' apertura del file %s\n", nome_file); else { while (fscanf (fp, "%d %d %d %s %s %s %s %.2f", &pos, &punt, &num, nome, cognome, naz, moto, &kmh) != EOF ) { printf ("Posizione di arrivo: %d\n", pos); printf ("Punteggio: %d\n", punt); printf ("Numero pilota: %d\n", num); printf ("Nome pilota: %s\n", nome); printf ("Cognome pilota: %s\n", cognome); printf ("Nazione: %s\n", naz); printf ("Moto: %s\n", moto); printf ("Media Kmh: %d\n\n", kmh); } } fclose(fp); return 0; } and there's my txt file: 1 25 99 Jorge LORENZO SPA Yamaha 164.4 2 20 26 Dani PEDROSA SPA Honda 164.1 3 16 4 Andrea DOVIZIOSO ITA Yamaha 163.8 4 13 1 Casey STONER AUS Honda 163.8 5 11 35 Cal CRUTCHLOW GBR Yamaha 163.6 6 10 19 Alvaro BAUTISTA SPA Honda 163.5 7 9 46 Valentino ROSSI ITA Ducati 163.3 8 8 6 Stefan BRADL GER Honda 162.9 9 7 69 Nicky HAYDEN USA Ducati 162.5 10 6 11 Ben SPIES USA Yamaha 162.3 11 5 8 Hector BARBERA SPA Ducati 162.1 12 4 17 Karel ABRAHAM CZE Ducati 160.9 13 3 41 Aleix ESPARGARO SPA ART 160.2 14 2 51 Michele PIRRO ITA FTR 160.1 15 1 14 Randy DE PUNIET FRA ART 160.0 16 0 77 James ELLISON GBR ART 159.9 17 0 54 Mattia PASINI ITA ART 159.4 18 0 68 Yonny HERNANDEZ COL BQR 159.4 19 0 9 Danilo PETRUCCI ITA Ioda 158.2 20 0 22 Ivan SILVA SPA BQR 158.2 When I run my program, it return me an infinite loop of the first one. Why? Is there another function to read those data?
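
    Two things in the snippet explain the loop (noted here as a suggested fix, not the original code): scanf-family format strings do not accept a precision, so "%.2f" causes a matching failure and fscanf keeps returning the number of successful conversions, never EOF, without consuming the offending token; and the NULL check tests the function fopen instead of the returned fp. A corrected core would look roughly like this:

        if (fp == NULL) {                        /* test the stream, not fopen */
            printf("Errore nell'apertura del file %s\n", nome_file);
            return 1;
        }

        /* no precision in scanf formats; check for 8 successful conversions */
        while (fscanf(fp, "%d %d %d %20s %20s %20s %20s %f",
                      &pos, &punt, &num, nome, cognome, naz, moto, &kmh) == 8) {
            /* ... printf lines as before ... */
            printf("Media Kmh: %.2f\n\n", kmh);  /* %f, not %d, for a float */
        }
        fclose(fp);

    Testing the return value against 8 (the number of conversions) is also more robust than comparing with EOF, since a partial match stops the loop instead of spinning.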

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }
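
    There is no single confirmed cause for the transport-level error in the BatchInsert path, but as a point of comparison, here is a hedged variant of the SqlBulkCopy version with an explicit transaction, a longer timeout, and a batch size decoupled from the total row count (keeping BatchSize well below the row count simply bounds the size of each round trip and each internal transaction):

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            using (SqlBulkCopy sbc = new SqlBulkCopy(conn, SqlBulkCopyOptions.KeepIdentity, tx))
            {
                sbc.DestinationTableName = "TBL_TEST_TEST";
                sbc.BatchSize = 5000;        // rows per internal batch, not the total
                sbc.BulkCopyTimeout = 600;   // seconds; the default is 30
                sbc.ColumnMappings.Add("NAME", "NAME");
                sbc.WriteToServer(GetDataTable());
                tx.Commit();
            }
        }

    No stored procedure is needed for SqlBulkCopy; it streams rows directly into the destination table, which is a large part of why it is so much faster than the adapter-based batch insert.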

    Read the article

  • MVC Entity Framework Model not returning correct data

    - by quagland
    Hi, Run into a strange problem while writing an ASP.NET MVC site. I have a view in my SQL Server database that returns a few date ranges. The view works fine when running the query in SSMS. When the view data is returned by the Entity Framework Model, It returns the correct number of rows but some of the rows are duplicated. Here is an example of what I have done: SQL Server code: CREATE TABLE [dbo].[A]( [ID] [int] NOT NULL, [PhID] [int] NULL, [FromDate] [datetime] NULL, [ToDate] [datetime] NULL, CONSTRAINT [PK_A] PRIMARY KEY CLUSTERED ([ID] ASC)) ON [PRIMARY] go CREATE TABLE [dbo].[B]( [PhID] [int] NOT NULL, [FromDate] [datetime] NULL, [ToDate] [datetime] NULL, CONSTRAINT [PK_B] PRIMARY KEY CLUSTERED ( [PhID] ASC )) ON [PRIMARY] go CREATE VIEW C as SELECT A.ID, CASE WHEN A.PhID IS NULL THEN A.FromDate ELSE B.FromDate END AS FromDate, CASE WHEN A.PhID IS NULL THEN A.ToDate ELSE B.ToDate END AS ToDate FROM A LEFT OUTER JOIN B ON A.PhID = B.PhID go INSERT INTO B (PhID, FromDate, ToDate) VALUES (100, '20100615', '20100715') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, NULL, '20100101', '20100201') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (1, 100, '20100615', '20100715') INSERT INTO B (PhID, FromDate, ToDate) VALUES (101, '20101201', '20101231') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, NULL, '20100801', '20100901') INSERT INTO A (ID, PhID, FromDate, ToDate) VALUES (2, 101, '20101201', '20101231') So now, if you select all from C, you get 4 separate date ranges In the Entity Framework Model (which I call 'Core'), the view 'C' is added. in MVC Controller: public class HomeController : Controller { public ActionResult Index() { CoreEntities db = new CoreEntities(); var clist = from c in db.C select c; return View(clist.ToList()); } } in MVC View: @model List<RM.Models.C> @{ foreach (RM.Models.C c in Model) { @String.Format("{0:dd-MMM-yyyy}", c.FromDate) <span>-</span> @String.Format("{0:dd-MMM-yyyy}", c.ToDate) <br /> } } When I run all this, it outputs this: 01-Jan-2010 - 01-Feb-2010 01-Jan-2010 - 01-Feb-2010 01-Aug-2010 - 01-Sep-2010 01-Aug-2010 - 01-Sep-2010 When it should do this (this is what the view returns): 01-Jan-2010 - 01-Feb-2010 15-Jun-2010 - 15-Jul-2010 01-Aug-2010 - 01-Sep-2010 01-Dec-2010 - 31-Dec-2010 Also, I've run the SQL profiler over it and according to that, the query being executed is: SELECT [Extent1].[ID] AS [ID], [Extent1].[FromDate] AS [FromDate], [Extent1].[ToDate] AS [ToDate] FROM (SELECT [C].[ID] AS [ID], [C].[FromDate] AS [FromDate], [C].[ToDate] AS [ToDate] FROM [dbo].[C] AS [C]) AS [Extent1] Which returns the correct data So it seems that the entity framework is doing something to the data in the meantime. To me, everything looks fine! Have I missed something? Cheers, Ben
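
    Not stated in the post, but consistent with the symptom: when Entity Framework imports a view it infers an entity key from the non-nullable columns (here just ID), and identity resolution then collapses the two rows sharing ID = 1 (and ID = 2) into one repeated entity. A hedged workaround, assuming the generated model is an EF 4 ObjectContext, is to skip identity resolution for this query:

        using (CoreEntities db = new CoreEntities())
        {
            // NoTracking materializes every row instead of reusing the
            // entity already cached under the same (inferred) key.
            db.C.MergeOption = System.Data.Objects.MergeOption.NoTracking;
            return View(db.C.ToList());
        }

    Alternatively, widening the entity key in the designer to include FromDate and ToDate (or exposing a genuinely unique column from the view) fixes it at the model level.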

    Read the article

  • Helping linqtosql datacontext use implicit conversion between varchar column in the database and tab

    - by user213256
    I am creating an mssql database table, "Orders", that will contain a varchar(50) field, "Value" containing a string that represents a slightly complex data type, "OrderValue". I am using a linqtosql datacontext class, which automatically types the "Value" column as a string. I gave the "OrderValue" class implicit conversion operators to and from a string, so I can easily use implicit conversion with the linqtosql classes like this: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // use implicit converstion to turn the string representation of the order // value into the complex data type. OrderValue value = order.Value; // adjust one of the fields in the complex data type value.Shipping += 10; // use implicit conversion to store the string representation of the complex // data type back in the linqtosql order object order.Value = value; // save changes db.SubmitChanges(); However, I would really like to be able to tell the linqtosql class to type this field as "OrderValue" rather than as "string". Then I would be able to avoid complex code and re-write the above as: // get an order from the orders table MyDataContext db = new MyDataContext(); Order order = db.Orders(o => o.id == 1); // The Value field is already typed as the "OrderValue" type rather than as string. // When a string value was read from the database table, it was implicity converted // to "OrderValue" type. order.Value.Shipping += 10; // save changes db.SubmitChanges(); In order to achieve this desired goal, I looked at the datacontext designer and selected the "Value" field of the "Order" table. Then, in properties, I changed "Type" to "global::MyApplication.OrderValue". The "Server Data Type" property was left as "VarChar(50) NOT NULL" The project built without errors. However, when reading from the database table, I was presented with the following error message: Could not convert from type 'System.String' to type 'MyApplication.OrderValue'. at System.Data.Linq.DBConvert.ChangeType(Object value, Type type) at Read_Order(ObjectMaterializer1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader2.MoveNext() at System.Linq.Buffer1..ctor(IEnumerable1 source) at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source) at Example.OrdersProvider.GetOrders() at ... etc From the stack trace, I believe this error is happening while reading the data from the table. When presented with converting a string to my custom data type, even though the implicit conversion operators are present, the DBConvert class gets confused and throws an error. Is there anything I can do to help it not get confused and do the implicit conversion? Thanks in advance, and apologies if I have posted in the wrong forum. cheers / Ben
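
    One common workaround (a sketch with hypothetical names, not a confirmed LINQ to SQL feature): leave the mapped column as a string so materialization never has to convert anything, and expose the typed view through a partial class that leans on the existing implicit operators:

        public partial class Order
        {
            // Typed wrapper over the string column; the implicit operators
            // defined on OrderValue do the actual conversion both ways.
            public OrderValue ValueData
            {
                get { return this.Value; }      // string -> OrderValue
                set { this.Value = value; }     // OrderValue -> string
            }
        }

        // usage
        Order order = db.Orders.Single(o => o.id == 1);
        OrderValue value = order.ValueData;
        value.Shipping += 10;
        order.ValueData = value;
        db.SubmitChanges();

    This keeps DBConvert out of the picture entirely; renaming the generated storage property in the designer and calling the wrapper Value would tidy the surface up further, at the cost of a little designer churn.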

    Read the article

  • Our Look at the Internet Explorer 9 Platform Preview

    - by Asian Angel
    Have you been hearing all about Microsoft’s work on Internet Explorer 9 and are curious about it? If you are wanting a taste of the upcoming release then join us as we take a look at the Internet Explorer 9 Platform Preview. Note: Windows Vista and Server 2008 users may need to install a Platform Update (see link at bottom for more information). Getting Started If you are curious about the systems that the platform preview will operate on here is an excerpt from the FAQ page (link provided below). There are two important points of interest here: The platform preview does not replace your regular Internet Explorer installation The platform preview (and the final version of Internet Explorer 9) will not work on Windows XP There really is not a lot to the install process…basically all that you will have to deal with is the “EULA Window” and the “Install Finished Window”. Note: The platform preview will install to a “Program Files Folder” named “Internet Explorer Platform Preview”. Internet Explorer 9 Platform Preview in Action When you start the platform preview up for the first time you will be presented with the Internet Explorer 9 Test Drive homepage. Do not be surprised that there is not a lot to the UI at this time…but you can get a good idea of how Internet Explorer will act. Note: You will not be able to alter the “Homepage” for the platform preview. Of the four menus available there are two that will be of interest to most people…the “Page & Debug Menus”. If you go to navigate to a new webpage you will need to go through the “Page Menu” unless you have installed the Address Bar Mini-Tool (shown below). Want to see what a webpage will look like in an older version of Internet Explorer? Then choose your version in the “Debug Menu”. We did find it humorous that IE6 was excluded from the choices offered. Here is what the URL entry window looks like if you are using the “Page Menu” to navigate between websites. Here is the main page of the site here displayed in “IE9 Mode”…looking good. Here is the main page viewed in “Forced IE5 Document Mode”. There were some minor differences (colors, sidebar, etc.) in how the main page displayed in comparison to “IE9 Mode”. Being able to switch between modes makes for an interesting experience… As you can see there is not much to the “Context Menu” at the moment. Notice the slightly altered icon for the platform preview… “Add” an Address Bar of Sorts If you would like to use a “make-shift” Address Bar with the platform preview you can set up the portable file (IE9browser.exe) for the Internet Explorer 9 Test Platform Addressbar Mini-Tool. Just place it in an appropriate folder, create a shortcut for it, and it will be ready to go. Here is a close look at the left side of the Address Bar Mini-Tool. You can try to access “IE Favorites” but may have sporadic results like those we experienced during our tests. Note: The Address Bar Mini-Tool will not line up perfectly with the platform preview but still makes a nice addition. And a close look at the right side of the Address Bar Mini-Tool. In order to completely shut down the Address Bar Mini-Tool you will need to click on “Close”. Each time that you enter an address into the Address Bar Mini-Tool it will open a new window/instance of the platform preview. Note: During our tests we noticed that clicking on “Home” in the “Page Menu” opened the previously viewed website but once we closed and restarted the platform preview the test drive website was the starting/home page again. 
Even if the platform preview is not running the Address Bar Mini-Tool can still run as shown here. Note: You will not be able to move the Address Bar Mini-Tool from its locked-in position at the top of the screen. Now for some fun. With just the Address Bar Mini-Tool open you can enter an address and cause the platform preview to open. Here is our example from above now open in the platform preview…good to go. Conclusion During our tests we did experience the occasional crash but overall we were pleased with the platform preview’s performance. The platform preview handled rather well and definitely seemed much quicker than Internet Explorer 8 on our test system (a definite bonus!). If you are an early adopter then this could certainly get you in the mood for the upcoming beta releases! Links Download the Internet Explorer 9 Preview Platform Download the Internet Explorer 9 Test Platform Addressbar Mini-Tool Information about Platform Update for Windows Vista & Server 2008 View the Internet Explorer 9 Platform Preview FAQ

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
    Marko Parkkola This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment which created a very interesting discussion here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has written an excellent piece here and comments are welcome. Enumerations in Relational Database This is a very basic subject in relational databases, but it is often not very well understood and is sometimes badly implemented. There are of course many ways to do this but I concentrate on only two cases, one which is “the right way” and one which is definitely the wrong way. The concept Let’s say we have table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there’s a field that tells the person’s marital status, and let’s name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn’t have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you’ll see the enumeration’s description instantly when you do a simple SELECT query and you don’t have to deal with mysterious values. There are plenty of downsides too and one would be database fragmentation. Consider this (I’ve left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(20) ) You have an nvarchar(20) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn’t fit into 20 characters? You’ll have to come and alter the table. There are other problems also but I’ll leave those for the reader to think about. The correct way Here’s how I’ve done this in many projects. This model still has one problem but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I’ll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I’ll add a couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there are four columns in the CodeValue table. 
CodeNamespaceId tells which namespace the value belongs to. Value tells the enumeration value which is used in the Person table (I’ll show how this is done below). Description tells what the value means. You can use this column, for example, in the UI’s combo box. OrderBy tells if the values need to be ordered in some way when displayed in the UI. And here’s the Person table again, now with the correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. The MaritalStatus column doesn’t have any database-enforced relationship to the CodeValue table, so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and putting them into a combobox / dropdownlist (with the Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I check the entered value in the data access layer also (a database-side alternative is sketched after this article). I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema-bound so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is an excellent write-up by Marko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
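For completeness, here is one way to get the database-side enforcement mentioned above with a CHECK constraint. This is a hedged sketch rather than part of Marko's original design; the function and constraint names are invented, and note that a UDF-based CHECK adds a lookup to every INSERT/UPDATE on Person and is not re-validated if rows are later removed from CodeValue.

CREATE FUNCTION [dbo].[fn_IsValidCodeValue] (@NamespaceName NVARCHAR(100), @Value INT)
RETURNS BIT
AS
BEGIN
    -- Returns 1 when @Value exists under the given namespace, 0 otherwise
    IF EXISTS (SELECT 1
               FROM [dbo].[CodeValue] c
               JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId]
               WHERE n.[Name] = @NamespaceName AND c.[Value] = @Value)
        RETURN 1
    RETURN 0
END
GO

ALTER TABLE [dbo].[Person]
    ADD CONSTRAINT [CK_Person_MaritalStatus]
    CHECK ([dbo].[fn_IsValidCodeValue]('MaritalStatus', [MaritalStatus]) = 1)
GO

With the constraint in place, the INSERT shown above still succeeds because 3 ('Married') is a valid value, while an INSERT with, say, MaritalStatus = 99 would be rejected by the database itself.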

    Read the article

  • C#: System.Lazy<T> and the Singleton Design Pattern

    - by James Michael Hare
    So we've all coded a Singleton at one time or another.  It's a really simple pattern and can be a slightly more elegant alternative to global variables.  Make no mistake, Singletons can be abused and are often over-used -- but occasionally you find a Singleton is the most elegant solution. For those of you not familiar with a Singleton, the basic Design Pattern is that a Singleton class is one where there is only ever one instance of the class created.  This means that constructors must be private to avoid users creating their own instances, and a static property (or method in languages without properties) is defined that returns a single static instance.

public class Singleton
{
    // the single instance is defined in a static field
    private static readonly Singleton _instance = new Singleton();

    // constructor private so users can't instantiate on their own
    private Singleton()
    {
    }

    // read-only property that returns the static field
    public static Singleton Instance
    {
        get
        {
            return _instance;
        }
    }
}

This is the most basic singleton; notice the key features: Static readonly field that contains the one and only instance. Constructor is private so it can only be called by the class itself. Static property that returns the single instance. Looks like it satisfies, right?  There's just one (potential) problem.  C# gives you no guarantee of when the static field _instance will be created.  This is because the C# standard simply states that classes (which are marked in the IL as BeforeFieldInit) can have their static fields initialized any time before the field is accessed.  This means that they may be initialized on first use, or they may be initialized at some other time before; you can't be sure when. So what if you want to guarantee your instance is truly lazy?  That is, that it is only created on first call to Instance?  Well, there are a few ways to do this.  First we'll show the old ways, and then talk about how .Net 4.0's new System.Lazy<T> type can help make the lazy-Singleton cleaner. Obviously, we could take on the lazy construction ourselves, but being that our Singleton may be accessed by many different threads, we'd need to lock it down.

public class LazySingleton1
{
    // lock for thread-safety laziness
    private static readonly object _mutex = new object();

    // static field to hold single instance
    private static LazySingleton1 _instance = null;

    // property that does some locking and then creates on first call
    public static LazySingleton1 Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_mutex)
                {
                    if (_instance == null)
                    {
                        _instance = new LazySingleton1();
                    }
                }
            }

            return _instance;
        }
    }

    private LazySingleton1()
    {
    }
}

This is a standard double-check algorithm so that you don't lock if the instance has already been created.  However, because it's possible two threads can go through the first if at the same time the first time in, you need to check again after the lock is acquired to avoid creating two instances. Pretty straightforward, but ugly as all heck.  Well, you could also take advantage of the C# standard's BeforeFieldInit and define your class with a static constructor.  It need not have a body; just the presence of the static constructor will remove the BeforeFieldInit attribute on the class and guarantee that no fields are initialized until the first static field, property, or method is called.
public class LazySingleton2
{
    // because of the static constructor, this won't get created until first use
    private static readonly LazySingleton2 _instance = new LazySingleton2();

    // Returns the singleton instance using lazy-instantiation
    public static LazySingleton2 Instance
    {
        get { return _instance; }
    }

    // private to prevent direct instantiation
    private LazySingleton2()
    {
    }

    // removes BeforeFieldInit on class so static fields not
    // initialized before they are used
    static LazySingleton2()
    {
    }
}

Now, while this works perfectly, I hate it.  Why?  Because it's relying on a non-obvious trick of the IL to guarantee laziness.  Just looking at this code, you'd have no idea that it's doing what it's doing.  Worse yet, you may decide that the empty static constructor serves no purpose and delete it (which removes your lazy guarantee).  Worse-worse yet, they may alter the rules around BeforeFieldInit in the future, which could change this. So, what do I propose instead?  .Net 4.0 adds the System.Lazy type which guarantees thread-safe lazy-construction.  Using System.Lazy<T>, we get:

public class LazySingleton3
{
    // static holder for instance, need to use lambda to construct since constructor private
    private static readonly Lazy<LazySingleton3> _instance
        = new Lazy<LazySingleton3>(() => new LazySingleton3());

    // private to prevent direct instantiation.
    private LazySingleton3()
    {
    }

    // accessor for instance
    public static LazySingleton3 Instance
    {
        get
        {
            return _instance.Value;
        }
    }
}

Note, you need your lambda to call the private constructor as Lazy's default constructor can only call public constructors of the type passed in (which we can't have by definition of a Singleton).  But, because the lambda is defined inside our type, it has access to the private members so it's perfect. Note how the Lazy<T> makes it obvious what you're doing (lazy construction), instead of relying on an IL generation side-effect.  This way, it's more maintainable.  Lazy<T> has many other uses as well, obviously, but I really love how elegant and readable it makes the lazy Singleton.

    Read the article

  • Enable Automatic Code First Migrations On SQL Database in Azure Web Sites

    - by Steve Michelotti
    Now that Azure supports .NET Framework 4.5, you can use all the latest and greatest available features. A common scenario is to be able to use Entity Framework Code First Migrations with a SQL Database in Azure. Prior to Code First Migrations, Entity Framework provided database initializers. While convenient for demos and prototypes, database initializers weren’t useful for much beyond that because, if you delete and re-create your entire database when the schema changes, you lose all of your operational data. This is the void that Migrations are meant to fill. For example, if you add a column to your model, Migrations will alter the database to add the column rather than blowing away the entire database and re-creating it from scratch. Azure is becoming increasingly easier to use – especially with features like Azure Web Sites. Being able to use Entity Framework Migrations in Azure makes deployment easier than ever. In this blog post, I’ll walk through enabling Automatic Code First Migrations on Azure. I’ll use the Simple Membership provider for my example. First, we’ll create a new Azure Web site called “migrationstest” including creating a new SQL Database along with it: Next we’ll go to the web site and download the publish profile: In the meantime, we’ve created a new MVC 4 website in Visual Studio 2012 using the “Internet Application” template. This template is automatically configured to use the Simple Membership provider. We’ll do our initial Publish to Azure by right-clicking our project and selecting “Publish…”. From the “Publish Web” dialog, we’ll import the publish profile that we downloaded in the previous step: Once the site is published, we’ll just click the “Register” link from the default site. Since the AccountController is decorated with the [InitializeSimpleMembership] attribute, the initializer will be called and the initial database is created. We can verify this by connecting to our SQL Database on Azure with SQL Management Studio (after making sure that our local IP address is added to the list of Allowed IP Addresses in Azure): One interesting note is that these tables got created with the default Entity Framework initializer – which is to create the database if it doesn’t already exist. However, our database did already exist! This is because there is a new feature of Entity Framework 5 where Code First will add tables to an existing database as long as the target database doesn’t contain any of the tables from the model. At this point, it’s time to enable Migrations. We’ll open the Package Manager Console and execute the command: PM> Enable-Migrations -EnableAutomaticMigrations This will enable automatic migrations for our project. Because we used the “-EnableAutomaticMigrations” switch, it will create our Configuration class with a constructor that sets the AutomaticMigrationsEnabled property to true:

public Configuration()
{
    AutomaticMigrationsEnabled = true;
}

We’ll now add our initial migration: PM> Add-Migration Initial This will create a migration class called “Initial” that contains the entire model. But we need to remove all of this code because our database already exists, so we are just left with empty Up() and Down() methods.
public partial class Initial : DbMigration
{
    public override void Up()
    {
    }

    public override void Down()
    {
    }
}

If we don’t remove this code, we’ll get an exception the first time we attempt to run migrations that tells us: “There is already an object named 'UserProfile' in the database”. This blog post by Julie Lerman fully describes this scenario (i.e., enabling migrations on an existing database). Our next step is to add the Entity Framework initializer that will automatically use Migrations to update the database to the latest version. We will add these 2 lines of code to the Application_Start of the Global.asax:

Database.SetInitializer(new MigrateDatabaseToLatestVersion<UsersContext, Configuration>());
new UsersContext().Database.Initialize(false);

Note the Initialize() call will force the initializer to run if it has not been run before. At this point, we can publish again to make sure everything is still working as we are expecting. This time we’re going to specify in our publish profile that Code First Migrations should be executed: Once we have re-published we can once again navigate to the Register page. At this point the database has not been changed but Migrations is now enabled on our SQL Database in Azure. We can now customize our model. Let’s add 2 new properties to the UserProfile class – Email and DateOfBirth:

[Table("UserProfile")]
public class UserProfile
{
    [Key]
    [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
    public int UserId { get; set; }
    public string UserName { get; set; }
    public string Email { get; set; }
    public DateTime DateOfBirth { get; set; }
}

At this point all we need to do is simply re-publish. We’ll once again navigate to the Registration page and, because we had Automatic Migrations enabled, the database has been altered (*not* recreated) to add our 2 new columns. We can verify this by once again looking at SQL Management Studio: Automatic Migrations provide a quick and easy way to keep your database in sync with your model without the worry of having to re-create your entire database and lose data. With Azure Web Sites you can set up automatic deployment with Git or TFS and automate the entire process to make it dead simple.
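For reference, the schema change that Automatic Migrations applies here is roughly equivalent to the following T-SQL. This is only a sketch for illustration, not the literal script EF emits; the exact column types, nullability and default values Entity Framework chooses depend on its conventions and your configuration.

-- Approximation of what the automatic migration does for the two new properties
ALTER TABLE [dbo].[UserProfile] ADD [Email] NVARCHAR(MAX) NULL;
ALTER TABLE [dbo].[UserProfile] ADD [DateOfBirth] DATETIME NOT NULL DEFAULT '1900-01-01T00:00:00.000';

Spelling it out this way shows why Migrations beats the old drop-and-recreate initializers: the existing rows in UserProfile stay put while the schema catches up with the model.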

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters Pick a small project with a small(ish) team.  This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board! Research Research the tool(s) that you want to use.  Some tools provide all of the features you would need while some only provide a slice of the pie.  DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next.  Ideally a tool can track database versions and automatically apply updates.  The change script generation process can be manual, but having diff tools available to automatically generate it can really reduce the overhead to adoption.  Finally, an automated tool to generate a script file per database object is an added bonus as your version control system can quickly identify what was changed in a commit (add/del/modify), just like with code changes. Don’t settle on just one tool, identify several.  Then work with the team to evaluate the tools.  Have the team do some tests of the following scenarios with each tool: Baseline an existing database: can the migration tool work with legacy databases?  Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs. Add/drop tables Add/drop procedures/functions/views Alter tables (rename columns, add columns, remove columns) Massage data – migrations sometimes involve changing data types that cannot be implicitly casted and require you to decide how the data is explicitly cast to the new type.  This is a requirement for a migrations platform.  Think about a case where you might want to combine fields, or move a field from one table to another, you wouldn’t want to lose the data. Run the tool via the command line.  If you cannot automate the tool in Continuous Integration what is the point? Create a copy of a database on demand. Backup/restore databases locally. Let the team give feedback and decide together, what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms.  In general I would recommend staying away from the fluent platforms as they often lack baseline capabilities and add overhead to learn a new API when SQL is already a very well known DSL.  Code migrations often get messy with procedures/views/functions as these have to be created with SQL and aren’t cross platform anyways.  IMO stick to SQL based migrations. Reconciling Production If your project is a legacy application, you will need to reconcile the current state of production with your development databases.  Find changes in production and bring them down to development, even if they are old and need to be removed.  Once complete, produce a baseline of either dev or prod as they are now in sync.  Commit this to your VCS of choice. Add whatever schema changes tracking mechanism your tool requires to your development database.  This often requires adding a table to track the schema version of that database.  Your tool should support doing this for you.  You can add this table to production when you do your next release. Script out any changes currently in dev.  Remove production artifacts that you brought down during reconciliation.  Add change scripts for any outstanding changes in dev since the last production release.  Commit these to your repository.   
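To make the change-script idea above concrete, here is a hedged example of what one script in such a sequence might look like. The table and column names are invented for illustration, and the exact header/footer a script needs (version numbers, batching) depends on the migration tool you picked.

-- 003_CombineCustomerNameColumns.sql
-- Adds Customer.FullName, migrates data from the old columns, then removes them.
ALTER TABLE [dbo].[Customer] ADD [FullName] NVARCHAR(200) NULL;
GO

-- Massage data explicitly so nothing is lost in the change
UPDATE [dbo].[Customer]
SET [FullName] = LTRIM(RTRIM([FirstName] + ' ' + [LastName]));
GO

ALTER TABLE [dbo].[Customer] DROP COLUMN [FirstName], [LastName];
GO

A script like this is exactly what the tooling should be able to track, apply once per database, and run from the command line in Continuous Integration.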
Say No to Shared Dev DBs Simply put, you wouldn’t dream of sharing a code checkout, why would you share a development database?  If you have a shared dev database, back it up, distribute the backups and take the shared version offline (including the dev db server once all projects are using DB VCS).  Doing DB VCS with a shared database is bound to cause problems as people won’t be able to easily script out their own changes from those that others are working on.   First prod release Copy prod to your beta/testing environment.  Add the schema changes table (or mechanism) and do a test run of your changes.  If successful you can schedule this to be run on production.   Evaluation After your first release, evaluate the pain points of the process.  Try to find tools or modifications to existing tools to help fix them.  Don’t leave stones unturned, iteratively evolve your tools and practices to make the process as seamless as possible.  This is why I suggest open source alternatives.  Nothing is set in stone, a good example was adding transactional support to TSqlMigrations.  We ran into situations where an update would break a database, so I added a feature to do transactional updates and rollback on errors!  Another good example is generating change scripts.  We have been manually making these for months now.  I found an open source project called Open DB Diff and integrated this with TSqlMigrations.  These were things we just accepted at the time when we began adopting our tool set.  Once we became comfortable with the base functionality, it was time to start automating more of the process.  Just like anything else with development, never be afraid to try to find tools to make your job easier!   Enjoy -Wes

    Read the article

  • Replication Services in a BI environment

    - by jorg
    In this blog post I will explain the principles of SQL Server Replication Services without too much detail, and I will take a look at the BI capabilities that Replication Services could offer, in my opinion. SQL Server Replication Services provides tools to copy and distribute database objects from one database system to another and maintain consistency afterwards. These tools basically copy or synchronize data with little or no transformation; they do not offer capabilities to transform data or apply business rules, like ETL tools do. The only “transformation” Replication Services offers is to filter records or columns out of your data set. You can achieve this by selecting the desired columns of a table and/or by using WHERE statements like this: SELECT <published_columns> FROM [Table] WHERE [DateTime] >= getdate() - 60 There are three types of replication: Transactional Replication This type replicates data on a transactional level. The Log Reader Agent reads directly from the transaction log of the source database (Publisher) and clones the transactions to the Distribution Database (Distributor); this database acts as a queue for the destination database (Subscriber). Next, the Distribution Agent moves the cloned transactions that are stored in the Distribution Database to the Subscriber. The Distribution Agent can either run at scheduled intervals or continuously, which offers near real-time replication of data! So for example when a user executes an UPDATE statement on one or multiple records in the publisher database, this transaction (not the data itself) is copied to the distribution database and is then also executed on the subscriber. When the Distribution Agent is set to run continuously this process runs all the time and transactions on the publisher are replicated in small batches (near real-time); when it runs on scheduled intervals it executes larger batches of transactions, but the idea is the same. Snapshot Replication This type of replication makes an initial copy of database objects that need to be replicated; this includes the schemas and the data itself. All types of replication must start with a snapshot of the database objects from the Publisher to initialize the Subscriber. Transactional replication needs an initial snapshot of the replicated publisher tables/objects to run its cloned transactions on and maintain consistency. The Snapshot Agent copies the schemas of the tables that will be replicated to files that will be stored in the Snapshot Folder, which is a normal folder on the file system. When all the schemas are ready, the data itself will be copied from the Publisher to the snapshot folder. The snapshot is generated as a set of bulk copy program (BCP) files. Next, the Distribution Agent moves the snapshot to the Subscriber; if necessary it applies schema changes first and copies the data itself afterwards. The application of schema changes to the Subscriber is a nice feature: when you change the schema of the Publisher with, for example, an ALTER TABLE statement, that change is propagated by default to the Subscriber(s). Merge Replication Merge replication is typically used in server-to-client environments, for example when subscribers need to receive data, make changes offline, and later synchronize changes with the Publisher and other Subscribers, like with mobile devices that need to synchronize once in a while. Because I don’t really see BI capabilities here, I will not explain this type of replication any further. 
Replication Services in a BI environment Transactional Replication can be very useful in BI environments. In my opinion you never want to see users run custom (SSRS) reports or PowerPivot solutions directly on your production database; it can slow down the system and can cause deadlocks in the database, which can cause errors. Transactional Replication can offer a read-only, near real-time database for reporting purposes with minimal overhead on the source system. Snapshot Replication can also be useful in BI environments; if you don’t need a near real-time copy of the database, you can choose to use this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don’t know how the performance will be and if it really works as well for this purpose as I expect, but I want to try this out soon!
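To give a feel for how the record/column filtering mentioned above is wired up when you script a transactional publication, here is a rough sketch using the replication stored procedures. The publication, article and filter values are illustrative only, and the exact parameter list should be checked against Books Online before use.

-- Horizontal filter: only publish the last 60 days of orders
EXEC sp_addarticle
    @publication   = N'SalesReporting',
    @article       = N'Orders',
    @source_owner  = N'dbo',
    @source_object = N'Orders',
    @type          = N'logbased',
    @filter_clause = N'[OrderDate] >= DATEADD(DAY, -60, GETDATE())';

Vertical filtering (publishing only some columns) is handled separately, for example with sp_articlecolumn against the same article.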

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #005

    - by pinaldave
    Here is the list of curated articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below them. Let me know which one of the following is your favorite article from memory lane. 2006 SQL SERVER – Cursor to Kill All Process in Database I indeed wrote this cursor and when I look back, I wonder how naive I was to write this. The reason for writing this cursor was to free up my database from any existing connection so I can do database operation. This worked fine but there can be a potentially big issue if any important transaction was killed by this process. There is another way to achieve the same thing where we can use ALTER syntax to take the database into single-user mode (a quick example appears at the end of this post). Read more about that over here and here. 2007 Rules of Third Normal Form and Normalization Advantage – 3NF The rules of 3NF are mentioned here: Make a separate table for each set of related attributes, and give each table a primary key. If an attribute depends on only part of a multi-valued key, remove it to a separate table. If attributes do not contribute to a description of the key, remove them to a separate table. Correct Syntax for Stored Procedure SP Sometimes a simple question is the most important question. I often see incorrectly written stored procedures in industry. A few write code after the outermost BEGIN…END and a few write code after the GO statement. In this brief blog post, I have attempted to explain the same. 2008 Switch Between Result Pan and Query Pan – SQL Shortcut Many times when I am writing a query I have to scroll the result displayed in the result set. Most developers use the mouse to switch between the Query Pane and Result Pane. There are a few developers who are crazy about keyboard shortcuts. F6 is the key which can be used to switch between the query pane and tabs of the result pane. Interesting Observation – Use of Index and Execution Plan Query Optimization is a complex game and it has its own rules. From the example in the article we have discovered that the Query Optimizer does not always use the clustered index to retrieve data; sometimes a non-clustered index provides optimal performance for retrieving the Primary Key. When all the rows and columns are selected, the Primary Key should be used to select data as it provides optimal performance. 2009 Interesting Observation – TOP 100 PERCENT and ORDER BY If you pull up any application or system where more than 100 SQL Server Views are created – I am very confident that at one or two places you will notice the scenario wherein the ORDER BY clause is used in a View with TOP 100 PERCENT. A SQL Server 2008 VIEW with an ORDER BY clause does not throw an error; moreover, it does not acknowledge the presence of it either. In this article we have taken three perfect examples and demonstrated which clause we should use when. Comma Separated Values (CSV) from Table Column A very common question – how to create comma separated values from a table in the database? The answer is also very common if we use XML (a small example appears at the end of this post). Check out this article for quick learning on the same subject. Azure Start Guide – Step by Step Installation Guide Though the Azure portal has changed quite a bit since I wrote this article, the concepts used in this article are not old. They are still valid and many of the functions are still working as mentioned in the article. I believe this one article will put you on the track to use Azure! 
Size of Index Table for Each Index – Solution Earlier I posted a small question on this blog and requested help from readers to participate and provide a solution. The puzzle was to write a query that will return the size for each index that is on any particular table. We need a query that will return an additional column in the above listed query and it should contain the size of the index. This article presents two of the best solutions from the puzzle. 2010 Well, this week in 2010 was the week of puzzles as I posted three interesting puzzles. Till today I am noticing pretty good interest in the puzzles. They are tricky but they certainly bring great value if you have been a database developer for a long time. I suggest you go over these puzzles and their answers. Did you really know all of the answers? I am confident that reading the following three blog posts will certainly help you enhance your experience with T-SQL. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists 2011 DVM sys.dm_os_sys_info Column Name Changed in SQL Server 2012 Have you ever faced a situation where something does not work? When you try to fix it - you enjoy fixing it and start to appreciate the breaking changes. Well, this was exactly how I felt yesterday. Before I begin my story, I want to candidly state that I do not encourage anybody to use * in the SELECT statement. Now the disclaimer is over – I suggest you read the original story – you will love it! Get Directory Structure using Extended Stored Procedure xp_dirtree Here is the question to you – why would you do something in SQL Server when you can do the same task in the command prompt much more easily? Well, the answer is that sometimes there are real use cases when we have to do such things. This is a similar example where I have demonstrated how in SQL Server 2012 we can use an extended stored procedure to retrieve the directory structure. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
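As promised above, here are quick sketches of two of the techniques referenced in this memory lane post. Both are generic illustrations; the database, table and column names are placeholders rather than anything from the original articles.

-- Take a database to single-user mode before maintenance, then bring it back
ALTER DATABASE [MyDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform the operation that needs exclusive access ...
ALTER DATABASE [MyDatabase] SET MULTI_USER;

-- Comma separated values from a table column using FOR XML PATH
SELECT STUFF(
    (SELECT ',' + [Name]
     FROM [dbo].[Employee]
     ORDER BY [Name]
     FOR XML PATH('')), 1, 1, '') AS [NameCsv];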

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL has been released.  This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.4.6 version of MySQL Connector/Net brings the following fixes: - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in   LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix now users can change the Index type of a new Index which could not be done   in previous versions, and when changing the Index name the change is reflected on the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracble bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64409, Oracle bug #13790123). - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char otherwise should create varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First also tested in Database First. Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL Bug #66578 Orabug #14593547). - Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing   several internal declaration of lastinsertid from int to long. - Fixed "Decimal type should have digits at right of decimal point", now default is 2, but user's changes in   EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). 
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204). - Small improvement on MySqlPoolManager CleanIdleConnections for better mysqlpoolmanager idlecleanuptimer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder: As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder. In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors. 
Once you create the class library project there are a few easy steps to change it into a static file project: The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file because the project doesn’t contain any C# code:

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files:

<Target Name="Build">
  <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" />
</Target>
<Target Name="Clean">
  <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" />
</Target>
<Target Name="Rebuild" DependsOnTargets="Clean;Build">
</Target>

The third and last step is to add all the files to the project as normal Content files (as you would do in any project type). To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files I hope this solution helps you out!

    Read the article

  • Migrating SQL Server Databases – The DBA’s Checklist (Part 2)

    - by Sadequl Hussain
    Continuing from Part 1, our Migration Checklist continues: Step 5: Update statistics It is always a good idea to update the statistics of the database that you have just installed or migrated. To do this, run the following command against the target database: sp_updatestats The sp_updatestats system stored procedure runs the UPDATE STATISTICS command against every user and system table in the database.  However, a word of caution: running sp_updatestats against a database with a compatibility level below 90 (SQL Server 2005) will reset the automatic UPDATE STATISTICS settings for every index and statistics of every table in the database. You may therefore want to change the compatibility mode before you run the command. Another thing you should remember to do is to ensure the new database has its AUTO_CREATE_STATISTICS and AUTO_UPDATE_STATISTICS properties set to ON. You can do so using the ALTER DATABASE command or from the SSMS. Step 6: Set database options You may have to change the state of a database after it has been restored. If the database was changed to single-user or read-only mode before backup, the restored copy will also retain these settings. This may not be an issue when you are manually restoring from Enterprise Manager or the Management Studio since you can change the properties. However, this is something to be mindful of if the restore process is invoked by an automated job or script and the database needs to be written to immediately after restore. You may want to check the database’s status programmatically in such cases. Another important option you may want to set for the newly restored / attached database is PAGE_VERIFY. This option specifies how you want SQL Server to ensure the physical integrity of the data. It is a new option from SQL Server 2005 and can have three values: CHECKSUM (default for SQL Server 2005 and later databases), TORN_PAGE_DETECTION (default when restoring a pre-SQL Server 2005 database) or NONE. Torn page detection was itself an option for SQL Server 2000 databases. From SQL Server 2005, when PAGE_VERIFY is set to CHECKSUM, the database engine calculates the checksum for a page’s contents and writes it to the page header before storing it on disk. When the page is read from the disk, the checksum is computed again and compared with the checksum stored in the header.  Torn page detection works in much the same way in that it stores a bit in the page header for every 512 byte sector. When data is read from the page, the torn page bits stored in the header are compared with the respective sector contents. When PAGE_VERIFY is set to NONE, SQL Server does not perform any checking, even if torn page data or checksums are present in the page header.  This may not be something you would want to set unless there is a very specific reason.  Microsoft suggests using the CHECKSUM page verify option as this offers more protection. Step 7: Map database users to logins A common database migration issue is related to user access. Windows and SQL Server native logins that existed in the source instance and had access to the database may not be present in the destination. Even if the logins exist in the destination, the mapping between the user accounts and the logins will not be automatic. You can use a special system stored procedure called sp_change_users_login to address these situations. The procedure needs to be run against the newly attached or restored database and can accept four parameters. 
Depending on what you want to do, you may be using less than four though. The first parameter, @Action, can take three values. When you specify @Action = ‘Report’, the system will provide you with a list of database users which are not mapped to any login. If you want to map a database user to an existing SQL Server login, the value for @Action will be ‘Update_One’. In this case, you will only need to provide the database user name and the login it will map to. So if your newly restored database has a user account called “bob” and there is already a SQL Server login with the same name and you want to map the user to the login, you will execute a query like the following: sp_change_users_login         @Action = ‘Update_One’,         @UserNamePattern = ‘bob’,         @LoginName = ‘bob’ If the login does not exist, you can instruct SQL Server to create the login with the same name. In this case you will need to provide a password for the login and the value of the @Action parameter will be ‘Auto_Fix’. If the login already exists, it will be automatically mapped to the user account. Unfortunately sp_change_users_login system stored procedure cannot be used to map database users to trusted logins (Windows accounts) in SQL Server. You will need to follow a manual process to re-map the database user accounts.  Continues…
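Pulling Steps 5 through 7 together, a typical post-migration script might look roughly like the following. The database name, user name and password are placeholders; adjust them (and verify each option) for your own environment.

USE [MigratedDb];
GO
-- Step 5: refresh statistics
EXEC sp_updatestats;
GO
-- Steps 5/6: statistics and page verification options
ALTER DATABASE [MigratedDb] SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE [MigratedDb] SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE [MigratedDb] SET PAGE_VERIFY CHECKSUM;
GO
-- Step 7: list orphaned users, then fix a known account
EXEC sp_change_users_login @Action = 'Report';
EXEC sp_change_users_login @Action = 'Auto_Fix', @UserNamePattern = 'bob', @Password = 'StrongPassword!1';
GO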

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >