Search Results

Search found 1399 results on 56 pages for 'naming'.

Page 49 of 56.

  • C#/DataSet: Create and bind a custom column/property in a DataSet's DataTable in the XXXDataSet.cs

    - by msfanboy
    Hello, I have a DataSet with a DataTable containing the columns Number and Description. I do not want to bind both properties to a BindingSource that is in turn bound to two controls. What I want is a third column in the DataTable, called NumberDescription, which is a composition of Number and Description; this property should be bound to only one control/BindingSource property, not two. There is the partial XXXDataSet.Designer.cs file and the partial XXXDataSet.cs file. Of course I have defined public string NumberDescription { get { /* some checks */ } set { /* some checks */ } } in the XXXDataSet.cs file. But none of this binds my new property/column to the BindingSource the DataTable is bound to, because the DataSet does not know about the new column/property. To make the new property/column known, I could add a new column named NumberDescription to the DataTable in the DataSet designer view. At least then I see the new property in the BindingSource listing, so I can choose it. But that still did not help. So how do I do this properly? Should I set the NumberDescription property from within the Number AND Description properties?

    Read the article

  • How would I sort files into directories based on filenames?

    - by gnomed
    I have a huge number of files to sort, all named in some terrible convention. Here are some examples: (4)_mr__mcloughlin____.txt 12__sir_john_farr____.txt (b)mr__chope____.txt dame_elaine_kellett-bowman____.txt dr__blackburn______.txt Each of these names is supposed to be a different person (speaker). Someone in another IT department produced these from a ton of XML files using some script, but the naming is unfathomably stupid, as you can see. I need to sort literally tens of thousands of these files, with multiple text files for each person, each with something making the filename different, be it extra underscores or some random number. They need to be sorted by speaker. This would be easier with a script to do most of the work; then I could just go back and merge folders that should be under the same name. There are a couple of ways I was thinking about doing this: (1) parse the name from each file and sort the files into folders for each unique name; or (2) get a list of all the unique names from the filenames, then look through this simplified list for similar ones, ask me whether they are the same, and once that is determined, sort everything accordingly. I plan on using Perl, but I can try a new language if it's worth it. I'm not sure how to read each filename in a directory, one at a time, into a string for parsing into an actual name. I'm not completely sure how to parse with regexes in Perl either, but that might be googleable. For the sorting, I was just going to use the shell command `cp filename.txt /example/destination/filename.txt`, because that's all I know, so it's easiest. I don't even have a pseudocode idea of what I'm going to do, so if someone knows the best sequence of actions, I'm all ears. I am open to any suggestions. Many thanks to anyone who can help. B.

    Read the article

  • What is good about PHP / what is PHP good for?

    - by Roman A. Taycher
    I have often seen PHP bashed around the web as a loosely typed language (loose typing as in a lot of type coercion and/or objects that are easily, and perhaps commonly, cast all over the place, not dynamic typing) without a great compiler/interpreter/VM, and with even the standard library using a number of different naming conventions. A lot of people complain about Perl, but many (including a lot of the complainers) also give it a lot of credit for its regexes and general flexibility and power. Other than legacy code, giant web frameworks that can do tons (Drupal, etc.), and easy, cheap hosting, what is good about PHP? (Also, which criticisms are unfair, and how is the language evolving to overcome its problems?) Why would I want to learn it? Why would I want to do an independent project in it? The main thing I have heard is that PHP code's simplicity is sometimes easier than the over-engineered complexity you find in certain Java frameworks and applications. I'm not just trolling; I'm genuinely curious what makes PHP programmers use it. Try to convince me to put it on my "languages to dabble in" and "languages to learn more in depth" lists.

    Read the article

  • Batch file recursively find files and rar them

    - by b1gf00t
    Hi there, I have a parent directory which hosts many subdirectories, and every subdirectory contains .mpg movies. Some of the directories might contain one or more .mpg movies. I would like to automate the process below, which I have been doing manually. Step One: if a directory has more than one .mpg file, I create a separate directory for each file and move each file into its own directory, naming the directory after the file. Step Two: I rar each video file in its directory as per one of my profiles; this splits the movie into 50MB parts, tests the archive, deletes the source, and instructs WinRAR to wait if another rar is executing. I am doing this so I can queue jobs manually. Step Three: after all the rars are in the subdirectories, I create a checksum for every directory, leaving a checksum.sfv in each one. Step Four: I copy the parent folder and its subdirectories to my external drives. I was hoping that someone could assist me in creating a script. I was able to automate the process of creating directories named after each file and moving the files; however, I never succeeded in automating Step Two. I am using the software below: WinRAR from rarlabs, exf from exactfile. I appreciate your assistance.

    Read the article

  • Mock implementations in C++

    - by forneo
    Hi guys, I need a mock implementation of a class - for testing purposes - and I'm wondering how I should best go about doing that. I can think of two general ways: Create an interface that contains all public functions of the class as pure virtual functions, then create a mock class by deriving from it. Mark all functions (well, at least all that are to be mocked) as virtual. I'm used to doing it the first way in Java, and it's quite common too (probably since they have a dedicated interface type). But I've hardly ever seen such interface-heavy designs in C++, thus I'm wondering. The second way will probably work, but I can't help but think of it as kind of ugly. Is anybody doing that? If I follow the first way, I need some naming assistance. I have an audio system that is responsible for loading sound files and playing the loaded tracks. I'm using OpenAL for that, thus I've called the interface "Audio" and the implementation "OpenALAudio". However, this implies that all OpenAL-specific code has to go into that class, which feels kind of limiting. An alternative would be to leave the class' name "Audio" and find a different one for the interface, e.g. "AudioInterface" or "IAudio". Which would you suggest, and why?

    Read the article

  • Ruby On Rails -

    - by Adam S
    I am trying to create a collection_select that is dependent on another collection_select, following Railscasts episode #88. The database includes a schedule of all cruise ships' arrival dates; see the attached schema diagram (the arrow side of the lines indicates has_many, the non-pointy side indicates belongs_to). In the "booking/new" view I have a collection_select for choosing a cruiseline, then another collection_select appears for selecting a ship, then another for selecting the date that ship is in. If I ONLY put in a collection_select for shipschedule, it works fine (because of the direct association with the bookings model). However, if I try to add a collection_select for cruiselines, it breaks (I'm assuming because there isn't a direct association). The collection_select for cruiselines returns an undefined method error for "cruiseline_id". If I simply use "id", the collection_select works, but of course it isn't fully functional due to the incorrect naming of the form field. I have tried "has_many :shipschedules, :through => :cruiseships" in the cruiseline model and "has_many :bookings, :through => :shipschedules" in the cruiseships model. As you can see in the diagram, I need to access the cruiseships and cruiselines models through the bookings model. I have the models set up so a cruiseline has_many cruiseships, a cruiseship has_many shipschedules, and a shipschedule has_many bookings. But the bookings model can't directly access the cruiseline model. How do I accomplish this? Thanks! http://www.adamstockland.com/common/images/RoRf/PastedGraphic-2.png http://www.adamstockland.com/common/images/RoRf/PastedGraphic-1.png

    Read the article

  • undefined method `code' for nil:NilClass message with rails and a legacy database

    - by Jude Osborn
    I'm setting up a very simple rails 3 application to view data in a legacy MySQL database. The legacy database is mostly rails ORM compatible, except that foreign key fields are pluralized. For example, my "orders" table has a foreign key field to the "companies" table called "companies_id" (rather than "company_id"). So naturally I'm having to use the ":foreign_key" attribute of "belongs_to" to set the field name manually. I haven't used rails in a few years, but I'm pretty sure I'm doing everything right, yet I get the following error when trying to access "order.currency.code": undefined method `code' for nil:NilClass This is a very simple application so far. The only thing I've done is generate the application and a bunch of scaffolds for each of the legacy database tables. Then I've gone into some of the models to make adjustments to accommodate the above mentioned difference in database naming conventions, and added some fields to the views. That's it. No funny business. So my database tables look like this (relevant fields only): orders ------ id description invoice_number currencies_id currencies ---------- id code description My Order model looks like this: class Order < ActiveRecord::Base belongs_to :currency, :foreign_key=>'currencies_id' end My Currency model looks like this: class Currency < ActiveRecord::Base has_many :orders end The relevant view snippet looks like this: <% @orders.each do |order| %> <tr> <td><%= order.description %></td> <td><%= order.invoice_number %></td> <td><%= order.currency.code %></td> </tr> <% end %> I'm completely out of ideas. Any suggestions?

    Read the article

  • What are advantages of using a one-to-one table relationship? (MySQL)

    - by byronh
    What are advantages of using a one-to-one table relationship as opposed to simply storing all the data in one table? I understand and make use of one-to-many, many-to-one, and many-to-many all the time, but implementing a one-to-one relationship seems like a tedious and unnecessary task, especially if you use naming conventions for relating (php) objects to database tables. I couldn't find anything on the net or on this site that could supply a good real-world example of a one-to-one relationship. At first I thought it might be logical to separate 'users', for example, into two tables, one containing public information like an 'about me' for profile pages and one containing private information such as login/password, etc. But why go through all the trouble of using unnecessary JOINS when you can just choose which fields to select from that table anyway? If I'm displaying the user's profile page, obviously I would only SELECT id,username,email,aboutme etc. and not the fields containing their private info. Anyone care to enlighten me with some real-world examples of one-to-one relationships?
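
    For illustration only, here is a minimal sketch of the kind of split described above; the table and column names are hypothetical. The defining feature of the one-to-one relationship is that the second table's primary key is also a foreign key to the first, so each users row has at most one companion row:

        -- Public profile data, read on every profile page
        CREATE TABLE users (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            username VARCHAR(50)  NOT NULL,
            email    VARCHAR(255) NOT NULL,
            aboutme  TEXT
        ) ENGINE=InnoDB;

        -- Private login data: at most one row per user, keyed by the same id
        CREATE TABLE user_logins (
            user_id       INT UNSIGNED NOT NULL PRIMARY KEY,
            password_hash CHAR(60)     NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users (id)
        ) ENGINE=InnoDB;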

    Read the article

  • Passive FTP on Windows Server 2008 R2 using the IIS7 FTP-Server

    - by ntor
    Hello ServerFault community! During the last few days I have been setting up a Windows Server 2008 R2 in VMware. I installed the standard FTP server on it by adding the Web Server (IIS) role. Everything works fine when accessing my FTP site with ftp://localhost in Firefox. I can also access it via the local IP of my server; actually, everything works fine on my LAN. But here is my problem: I want to get access "from outside", using the external IP or a DynDNS URL. I have a Linksys router in front of my server, and I am forwarding all the important ports. If you are now thinking "this idiot has probably forgotten some ports", I must disappoint you: it even works to access my server's website and mess around in some web interfaces. The problem is passive FTP (active works for me). I always get a timeout when, for example, FileZilla waits for a response to the LIST command. The one big thing I don't get is why my server responds to the PASV command naming a port like 40918, even though I have restricted the data port range for passive FTP (in the IIS Manager) to e.g. [5000-5009]. I simply don't want to open and forward all possible data ports! And another thing: I can't specify a static external IP address for my server, since I don't own any. I hope I have explained my problem in a comprehensible way; if not, simply ask by posting a comment! Best regards, ntor. PS: I have already tried mainly the following articles: Out Of Band FTP 7 shows "Operation timed out"; How to Configure Windows Firewall for a Passive Mode FTP Server; ServerFault: Passive ftp on Server 2008. EDIT: There is one idea rising in my mind: when I use FileZilla to connect in passive mode, I always get something like this: 227 Entering Passive Mode (192,168,1,102,160,86). According to a Rhinosoft article, FZ then tries to connect on port 160*256+86 = 41046, although I have restricted the data ports (as mentioned above). Could this be caused by the router not forwarding outgoing ports directly but using different ones? (The IP address given is the local one, since I'm not able to define a static external one in the IIS Manager.)

    Read the article

  • libpam-ldapd not looking for secondary groups

    - by Jorge Suárez de Lis
    I'm migrating from libpam-ldap to libpam-ldapd. I'm having some trouble gathering the secondary groups from LDAP. On libpam-ldap, I had this on the /etc/ldap.conf file: nss_schema rfc2307bis nss_base_passwd ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es nss_base_shadow ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es nss_base_group ou=Groups,ou=CITIUS,dc=inv,dc=usc,dc=es nss_map_attribute uniqueMember member The mapping is there because I'm using groupOfNames instead of groupOfUniqueNames LDAP class for groups, so the attribute naming the members is named member instead of uniqueMember. Now, I want to do the same using libpam-ldapd but I can't get it to work. Here's the relevant part of my /etc/nslcd.conf: base passwd ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es base shadow ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es base group ou=Groups,ou=CITIUS,dc=inv,dc=usc,dc=es map group uniqueMember member And this is the debug output from nslcd, when a user is authenticated: nslcd: [8b4567] DEBUG: connection from pid=12090 uid=0 gid=0 nslcd: [8b4567] DEBUG: nslcd_passwd_byuid(4004) nslcd: [8b4567] DEBUG: myldap_search(base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(&(objectClass=posixAccount)(uidNumber=4004))") nslcd: [8b4567] DEBUG: ldap_initialize(ldap://172.16.54.31/) nslcd: [8b4567] DEBUG: ldap_set_rebind_proc() nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3) nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0) nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10) nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10) nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10) nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON) nslcd: [8b4567] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON) nslcd: [8b4567] DEBUG: ldap_simple_bind_s("uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/") nslcd: [8b4567] connected to LDAP server ldap://172.16.54.31/ nslcd: [8b4567] DEBUG: ldap_result(): end of results nslcd: [7b23c6] DEBUG: connection from pid=15906 uid=0 gid=2000 nslcd: [7b23c6] DEBUG: nslcd_pam_authc("jorge.suarez","","su","***") nslcd: [7b23c6] DEBUG: myldap_search(base="ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(&(objectClass=posixAccount)(uid=jorge.suarez))") nslcd: [7b23c6] DEBUG: ldap_initialize(ldap://172.16.54.31/) nslcd: [7b23c6] DEBUG: ldap_set_rebind_proc() nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON) nslcd: [7b23c6] DEBUG: ldap_simple_bind_s("uid=ubuntu,ou=Applications,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/") nslcd: [7b23c6] connected to LDAP server ldap://172.16.54.31/ nslcd: [7b23c6] DEBUG: ldap_initialize(ldap://172.16.54.31/) nslcd: [7b23c6] DEBUG: ldap_set_rebind_proc() nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_PROTOCOL_VERSION,3) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_DEREF,0) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMELIMIT,10) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_TIMEOUT,10) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_NETWORK_TIMEOUT,10) nslcd: [7b23c6] DEBUG: 
ldap_set_option(LDAP_OPT_REFERRALS,LDAP_OPT_ON) nslcd: [7b23c6] DEBUG: ldap_set_option(LDAP_OPT_RESTART,LDAP_OPT_ON) nslcd: [7b23c6] DEBUG: ldap_simple_bind_s("uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es","*****") (uri="ldap://172.16.54.31/") nslcd: [7b23c6] connected to LDAP server ldap://172.16.54.31/ nslcd: [7b23c6] DEBUG: myldap_search(base="uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es", filter="(objectClass=posixAccount)") nslcd: [7b23c6] DEBUG: ldap_unbind() nslcd: [3c9869] DEBUG: connection from pid=15906 uid=0 gid=2000 nslcd: [3c9869] DEBUG: nslcd_pam_sess_o("jorge.suarez","uid=jorge.suarez,ou=People,ou=CITIUS,dc=inv,dc=usc,dc=es","su","/dev/pts/7","","jorge.suarez") It seems to me that it won't even try to look for groups. What I am doing wrong? I can't see anything relevant to my problem information on the docs. I'm probably not understanding how the map option works.

    Read the article

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have Windows Server 2008 Enterprise with 25 user CALs. It hosts a domain with all users and a network-shared HP printer. The server has two network cards, and both cards, as well as all client machines, use the IP addressing scheme 192.168.1.* with subnet mask 255.255.255.0. Of the two network cards, 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS. The second card, 192.168.1.233, has 192.168.1.231 as its default gateway and DNS address. The server has three hard disks with capacities of 500GB, 500GB and 1TB, partitioned as (C, D, E), (F, G) and (K), with partition K holding all user data in various shared folders. Each of these folders (on partition K) is mapped onto each user's computer according to the access rights given to them. The problem: the server was installed about 6 months ago and to date has not hung or given any problem even once. All the client computers are able to run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when a client computer tries to browse its network mappings, it hangs. Again, there is no fixed pattern; this may happen after running smoothly for, say, 3 days. On each client's machine, the network settings are as follows: IP address: 192.168.1.* where * is 1, 2, 3...; subnet mask: 255.255.255.0; default gateway: 192.168.1.231, which is a server card and the DNS address; preferred DNS server: 192.168.1.231. In the Advanced tab under WINS, LMHOSTS lookup is unticked and the default option is selected. Ideally, I would have loved to disable NetBIOS over TCP/IP, because disabling NetBIOS drastically reduces the NetBIOS broadcast traffic sent to all computers on the net for name resolution; but some network printers cannot be accessed if that option is enabled (i.e. radio-buttoned). On the server, WINS is running; I have scavenged records, verified database integrity, removed tombstoned records, etc. The critical errors, shown only once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse mapped network shared folders on the K drive. Kindly advise.

    Read the article

  • Expectations for NTFS file recovery

    - by Fred Hamilton
    Yesterday I booted my XP system, and as I looked up a minute later I saw the light blue screen and tail-end of that pre-boot diskcheck Windows sometimes does if it finds an error (or was previously told to run a diskcheck drung the next boot). I didn't worry about it at the moment... But then I looked at my "scratch" disk, which was a 70% full, 750GB hard disk...and it now looks like it has been freshly formatted. It doesn't have a single file on it, just the hidden "System Volume Information" file and 750GB of freedom from data. I looked at some of the recovery tools from the Free NTFS partition recovery question and decided to try PC INSPECTOR™ File Recovery 4.x initially. It ran overnight and afterwards returned a list of thousands of files it could recover. The odd thing was that the filenames were lost, but the file extensions were not (WTF?). And all of the files were exactly 1,472kB in size. I recovered a dozen PDFs as a test, and 80% of them displayed OK despite being padded out to 1.5MB (though I assume any files 1472kB are hosed). My primary question is: Is this the best I can expect from any file recovery software when trying to recover NTFS files? Or is there perhaps something better out there? I assume this is as good as it gets, but wanted to check in with the experts first. Bonus questions: What might have happened to my drive? I didn't intentionally format it. I've never seen a disk error cause the drive to suddenly become a clean, reformatted drive. Could some malicious/confused software have told my PC to format my disk on reboot? Is that even a function Windows XP has? Why can the file extensions be recovered but not the filename? Does NTFS really treat them as separate entities? I thought I had 8.3 naming turned off, but maybe that had something to do with it. Or maybe it looks at the data in the file and guesses the extension?

    Read the article

  • debian packages version convention

    - by JackWu
    I'm using debian/Ubuntu, and get confused about versions of packages. When using dpkg -l command, I get: ii vim 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor ii vim-common 2:7.3.429-2ubuntu2.1 Vi IMproved - Common files ii vim-runtime 2:7.3.429-2ubuntu2.1 Vi IMproved - Runtime files ii vim-tiny 2:7.3.429-2ubuntu2.1 Vi IMproved - enhanced vi editor - compact version ii virt-what 1.11-1 detect if we are running in a virtual machine ii w3m 0.5.3-5ubuntu1 WWW browsable pager with excellent tables/frames support ii watershed 6 reduce superfluous executions of idempotent command ii wget 1.13.4-2ubuntu1 retrieves files from the web ii whiptail 0.52.11-2ubuntu10 Displays user-friendly dialog boxes from shell scripts ii whoopsie 0.1.33 Ubuntu crash database submission daemon ii wimlib9 1.5.0-1~webupd8~precise Library to extract, create, modify, and mount WIM files ii wimtools 1.5.0-1~webupd8~precise Tools to extract, create, modify, and mount WIM files ii wireless-tools 30~pre9-5ubuntu2 Tools for manipulating Linux Wireless Extensions ii wpasupplicant 0.7.3-6ubuntu2.1 client support for WPA and WPA2 (IEEE 802.11i) ii x11-common 1:7.6+12ubuntu2 X Window System (X.Org) infrastructure ii x11-utils 7.6+4ubuntu0.1 X11 utilities ii xauth 1:1.0.6-1 X authentication utility ii xbitmaps 1.1.1-1 Base X bitmaps ii xclip 0.12-1 command line interface to X selections ii xfonts-encodings 1:1.0.4-1ubuntu1 Encodings for X.Org fonts ii xfonts-utils 1:7.6+1 X Window System font utility programs ii xkb-data 2.5-1ubuntu1.3 X Keyboard Extension (XKB) configuration data ii xml-core 0.13 XML infrastructure and XML catalog file support rc xpdf 3.02-21build1 Portable Document Format (PDF) reader ii xterm 271-1ubuntu2.1 X terminal emulator ii xz-lzma 5.1.1alpha+20110809-3 XZ-format compression utilities - compatibility commands ii xz-utils 5.1.1alpha+20110809-3 XZ-format compression utilities ii zabbix-agent 1:1.8.11-1 network monitoring solution - agent ii zlib1g 1:1.2.3.4.dfsg-3ubuntu4 compression library - runtime ii zlib1g-dev 1:1.2.3.4.dfsg-3ubuntu4 compression library - development ii zsh 4.3.17-1ubuntu1 shell with lots of features The third column is version, but it all messed up in a way I can't understand. I mean, different packages use total different naming specification. Here are the major questions: Why there are ubuntu in them, and there are not? what all the special -~+ mean? alpha and build, dfsg, what are they? Can I just use them casually? vim and other packages have 2:, what does that mean? How version comparison works, since they can be so different? Can anyone please explain this to me? Or where can I find an official document? Thanks in advance.

    Read the article

  • ntpdate works, but ntpd can't synchronize

    - by dafydd
    This is in RHEL 5.5. First, ntpdate to the remote host works: $ ntpdate XXX.YYY.4.21 24 Oct 16:01:17 ntpdate[5276]: adjust time server XXX.YYY.4.21 offset 0.027291 sec Second, here are the server lines in my /etc/ntp.conf. All restrict lines have been commented out for troubleshooting. server 127.127.1.0 server XXX.YYY.4.21 I execute service ntpd start and check with ntpq: $ ntpq ntpq> peer remote refid st t when poll reach delay offset jitter ============================================================================== *LOCAL(0) .LOCL. 5 l 36 64 377 0.000 0.000 0.001 timeserver.doma .LOCL. 1 u 39 128 377 0.489 51.261 58.975 ntpq> opeer remote local st t when poll reach delay offset disp ============================================================================== *LOCAL(0) 127.0.0.1 5 l 40 64 377 0.000 0.000 0.001 timeserver.doma XXX.YYY.22.169 1 u 43 128 377 0.489 51.261 58.975 XXX.YYY.22.169 is the address of the host I'm working on. A reverse lookup on the IP address in my ntp.conf file validates that the ntpq output is correctly naming the remote server. However, as you can see, it appears to just roll over to my .LOCL. time server. Also, ntptrace just returns the local time server, and ntptrace XXX.YYY.4.21 times out. $ ntptrace localhost.localdomain: stratum 6, offset 0.000000, synch distance 0.948181 $ ntptrace XXX.YYY.4.21 XXX.YYY.4.21: timed out, nothing received ***Request timed out This looks like my ntp daemon is just querying itself. I am thinking about the possibility that the router-I-don't-control between my test network timeserver and the corporate network timeserver is blocking on source port. (I think ntpdate sends on port 123, which gets it around that filter and is why I can't use it while ntpd is running.) I have email in to the network folks to check that. Finally, telnet XXX.YYY.4.21 123 never times out or completes a connection. The questions: What am I missing, here? What else can I check to try to figure out where this connection is failing? Would strace ntptrace XXX.YYY.4.21 show me the source port ntptrace is sending from? I can deconstruct most strace calls, but I can't figure out the location of that datum. If I can't directly examine the gateway router between my test network and the timeserver, how might I build evidence that it's responsible for these disconnections? Alternately, how might I rule it out?

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrating your application onto Azure, one of the biggest concerns is external files. Traditionally we knew which machine and folder our application (website or web service) was located in, so we could use MapPath or some other method to read and write external files, for example images, text files or XML files. But things change when we deploy to Azure. Azure is not a server or a single machine; it's a set of virtual server machines running under the Azure OS. Even worse, your application might be moved between these machines, so it's impossible to read or write external files this way on Azure. In order to resolve this issue, Windows Azure provides another storage service for us: Blob.

    Unlike the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are composed of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages, and the maximum size for a page blob is 1 TB.

    In the managed library, the Azure SDK allows us to communicate with blobs through the classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob. As with the table service managed library, the CloudBlobClient lets us reach the blob service by passing our storage account information and is also responsible for creating the blob container if it does not exist. Then, from the CloudBlobContainer, we can save or load block blobs and page blobs via the CloudBlockBlob and CloudPageBlob classes.

    Let's improve our example from the previous posts by adding a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid.

        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var accountContext = new DynamicDataContext<Account>(storageAccount);

        // validation
        var accountNumber = accountContext.Load()
            .Where(a => a.Email == email)
            .ToList()
            .Count;
        if (accountNumber <= 0)
        {
            throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
        }

    Then there are three steps for saving the image into the blob service. First, as with the table service, I create the container with a unique name, creating it if it does not exist.

        // create the blob container for account logos if it does not exist
        CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
        container.CreateIfNotExist();

    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.

        // configure blob container for public access
        BlobContainerPermissions permissions = container.GetPermissions();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);

    Finally, I combine the blob resource name from the input file name and a Guid, and save it to the block blob by using the UploadByteArray method. At the end I return the URL of this blob back to the client side.

        // save the blob into the blob service
        string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
        CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
        blob.UploadByteArray(image);

        return blob.Uri.ToString();

    Let's update the client-side application a bit and see the result. Here I just use my simple console application to let the user enter the email and the file name of the image. If everything is OK, it shows the URL of the blob on the server side so that we can view it through a web browser. Then we can see the logo I've just uploaded through that URL. You may notice that the blob URL is based on the container name and the blob's unique name. The Azure SDK documentation has a page on the rules for naming them, but I think the simple rule is that they must be valid as a URL. So you cannot name the container with a dot or a slash, as that will break the ADO.NET Data Service routing rule. For example, if you named the blob container Account.Logo, it would throw an exception saying 400 Bad Request.

    Summary

    In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the local file system, we have to migrate our code for reading/writing files to the blob service before deploying to Azure. In order to reduce this effort, Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just as we did before. It's built on top of the blob service but is more suitable for file access. I will discuss it further in the next post.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to create a new Team Project Collection in TFS2010:

    - by jehan
    TFS 2010 has introduced the notion of the Team Project Collection (TPC). I have already discussed TPCs in my earlier post; you can check it out here. In this post, I will demonstrate how to create a new Team Project Collection in TFS 2010.

    First, open the TFS Administration Console (Start > All Programs > Microsoft Team Foundation Server 2010 > Team Foundation Server Administration Console), expand the Application Tier node and click on Team Project Collections. Here you will see the TPCs which already exist; I have only one TPC, named New Collection, and I am going to create a new TPC called Demo Collection. To create a new Team Project Collection, click on Create Collection; this opens the Create New Team Project Collection window.

    Under the Name tab, enter the name you want to give the new TPC (I am naming mine Demo Collection). You can also provide a description in the Description tab, which is optional, and then click Next. Next, enter the name of the SQL Server instance where you want the new TPC's data to reside. You can either have a database created for this TPC or use an existing empty database; then click Next.

    On the next screen, you choose the SharePoint configuration. You can configure a SharePoint site for the TPC at the default location, specify an existing SharePoint site, or choose not to configure SharePoint for this collection; if you choose the last option, you cannot configure SharePoint sites for the Team Projects under this project collection. You also have the flexibility to create a SharePoint site for this TPC later on, in which case you will have to configure SharePoint sites for the existing team projects manually.

    The next screen covers the Reports configuration. You can configure reports for the TPC at the default path, specify the path of an existing reports folder, or choose not to configure reports for this collection; if you choose the last option, reports cannot be created for the Team Projects under this project collection. Here too, you can enable reporting for this TPC later on.

    The next screen relates to the Lab Management configuration. Lab Management is a new feature in TFS 2010 which enables users to create and manage virtual test environments where you can deploy and test your application. There are no options available here, as I do not have Lab Management configured for my Team Foundation Server.

    The next screen is the Review Configuration window, which shows all the configuration settings you have specified so that you can review them before creating the Team Project Collection. If you want to make any changes, you can go back to the previous windows and make them. After reviewing the configuration settings, click the Verify button. This verifies whether your Team Project Collection is ready to be created and shows any errors and warnings that could make the creation fail. You can then choose to create the Team Project Collection if the verify step does not raise any warnings or errors.

    If the verify step does raise errors, it is strongly suggested that you rectify the issues first and only then go ahead with TPC creation. This applies especially to warnings, since it is common practice to overlook them.

    If you choose the create option, the process of creating the Team Project Collection starts, and once it has completed you can check the configuration status of the individual components. You can see in the screen below that all the components were configured successfully. On the next screen, you can find the location of the log file created for this Team Project Collection creation; this log file is really important in case of a creation failure, because it will help you find the root cause.

    Now you can see that the newly created Team Project Collection (Demo Collection) is available in the Team Project Collections list and its status is Online. You can now try to connect to it from Team Explorer: choose the newly created Team Project Collection and click Connect. This Team Project Collection is empty because no Team Projects have been created yet. Now you can create new Team Projects and start working.

    Read the article

  • MySQL Connector/Net 6.4.6 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.4.6, a new version of the all-managed .NET driver for MySQL has been released.  This is a maintenance release and is recommended for use in production environments. It is appropriate for use with MySQL server versions 5.0-5.6. This is intended to be the final release for Connector/NET 6.4. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point-if you can't find this version on some mirror, please try again later or choose another download site.) The 6.4.6 version of MySQL Connector/Net brings the following fixes: - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in   LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix now users can change the Index type of a new Index which could not be done   in previous versions, and when changing the Index name the change is reflected on the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracble bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64409, Oracle bug #13790123). - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char otherwise should create varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First also tested in Database First. Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error" (MySQL Bug #66578 Orabug #14593547). - Fix for handling unnamed parameter in MySQLCommand. This fix allows the mysqlcommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing   several internal declaration of lastinsertid from int to long. - Fixed "Decimal type should have digits at right of decimal point", now default is 2, but user's changes in   EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). 
- Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream's instances created (MySql bug #65696, Oracle bug #14468204). - Small improvement on MySqlPoolManager CleanIdleConnections for better mysqlpoolmanager idlecleanuptimer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.4.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support!

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent to running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn’t an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted. 2. It’s not just about the storage requirements, it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I’m just not going to get the feedback quickly enough to react. So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data and log file, .vmdf, and .vldf, that becomes the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there is no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool. 
        RESTORE DATABASE WidgetProduction_Virtual
        FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
        WITH
            MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
            MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
            NORECOVERY, STATS=1, REPLACE
        GO

        RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Happy 3rd Birthday SilverlightCream!

    - by Dave Campbell
    Happy 3rd Birthday!     Yesterday (May 16) was the 'Birthday' of SilverlightCream, which started just after MIX in 2007 with a post "Interesting Silverlight posts today: Silverlight Control & Silverlight Pad". Too many good posts flying around led me to want to archive them, particularly since I was being aggregated at a new site Silverlight.net, and I could give some of that 'reach' to the community. Saturday's post was number 862, and as of that post, there were 5697 blog posts archived in the database all tagged up and searchable at SilverlightCream.com using the search page. The search needs to be better, and that's another discussion, but it does work. The blog didn't begin life as the SilverlightCream blog, as is obvious from the name, but once I realized people were following it closely, I've tried to keep the signal-to-noise ratio very high. I even secured another blog for when I just want to rant about something to keep that stuff out of this one :) If you've been around since MIX07 days you've heard all this, but after talking to some people at MIX10 I realized not everyone knows all the ways the information is presented, so I figured doing a post like this once a year probably isn't a bad idea :) I scrounge through an ever-growing list of blogs (right now sitting at 505) looking for good stuff. I try to spin through the list every day, but with the list growing that large, it's getting tough. I usually use it as a background task while working or watching TV. If I just sit and go through the blogs it takes about an hour. The list is long enough now that from time to time, I'll only get partway through it and have 10 to 13 entries, so I'll just stop there and go on the next day... I don't like to have more than 15 in any single post. It's all pattern recognition as in "seen that", "seen that", "that's new", etc... so if you're a blogger, look at a heading below for some comments about blogging from my perspective. When I see something new, I make sure you're not pulling a 'Mike Taulty' on me and dumping 6 or 8 new posts in one day :), and I tag the ones I want to review. If there's not a lot going on, I may just push the posts as I come across them. Some days there may be 60 posts in that 'to review' list! Some are non-Silverlight, some are essentially duplicates of others, some are demos, ads, new releases of something, session materials, etc. I push lots of material into a database at WynApse.com, and the "Tagged Posts" menu on the left sidebar there takes you to a tag cloud of (at this very moment) "9224 articles tagged 13915 different ways using 459 unique tags". There are links in there on Gibson guitars, Jazz Guitar instructional stuff, Ford F-250 links, and tons of technical and non-technical stuff I've been aggregating for about 5 years now. So when I decide to blog (or shoutout) something, I first push it into the database at WynApse.com. Then I tag it all up and push it into the database at SilverlightCream.com. Then it gets pushed to @SilverlightNews. For a little over a year now, we're tracking unique IP hits on posts launched from either the blog post or from one of the SilverlightCream.com pages, and the posts with top hits from unique IP addresses in the last 7 days are displayed in a 'Skim' page at SilverlightCream... and that page needs work as well. The Skim page and tracking was the brainchild of my buddy Michael Washington. 
What I blog/shoutout After some time doing posts, I decided there were things that probably have no need to be searchable, but are good information, so I post those as 'Shoutouts'. Eventually I also decided the Shoutouts should get posted to @SilverlightNews, and that's now taking place. Notes to bloggers Remember I said spinning through the Big List-o-Blogs™ is pattern recognition... that means I don't spend a lot of time on any individual blog deciding if it has new content. If you're familiar with the term 'Above the Fold', then you're probably ok. If I have to scroll the page to see if there's something new, or wade through some maze of menus, I'm probably going to miss new stuff. Likewise if you only show the latest on the front page and make it a puzzle to find the rest of them, or if you make the titles and initial graphics almost identical to the previous article, I'll miss it. Another thing is name/brand-recognition. Far be it from me (WynApse) to comment on someone blogging with a pseudonym, but if you want to get some recognition, you are going to want your name to be available somewhere. I can think right off the top of my head of a couple of good blogs where I have no idea of the individuals' real names. I can pull that off a bit because I've been around so long almost everyone knows who I am, but if you're new to the blog-o-sphere, being able to be name-recognized is as important as getting your brand out there. Kick my tires Finally, stuff happens... I may hit the wrong key and delete your blog, or a post might slip past me and I may not realize it's new because of the naming, and never blog it. If you think I missed something, send me an email or use the submit page at SilverlightCream.com. Some bloggers have figured out that if they submit (one way or another) to me, their posts will go out next. I try to honor anyone that takes the time to submit with a quicker 'Cream posting. Thanks! Finally, thanks to everyone that contributes to the community as a whole... the blogs, the videos, and the presentations. A special thanks to everyone that reads SilverlightCream, or follows @WynApse or @SilverlightNews. Keep it all coming, and... Stay in the 'Light

    Read the article

  • SQL SERVER – Replace a Column Name in Multiple Stored Procedure all together

    - by pinaldave
    I receive a lot of emails every day. I try to answer each and every email and comment on Facebook and Twitter. I prefer communication on social media as this gives others the opportunity to read the questions and participate along with me. There is always some question which everyone likes to read and remember. Here is one of the questions which I received in email. I believe the same question will be on the minds of many developers who are beginning with SQL Server. I decided to blog about it so everyone can read it and participate. "I am a beginner in SQL Server. I have a very interesting situation and need your help. Because I am a beginner to SQL Server I do not have access to the production server, and I work entirely on the development server. The project I am working on is also in its infant stage. In the project I had to create multiple tables and every table had a few columns. Later on I wrote Stored Procedures using those tables. During a code review my manager requested that I change one of the columns which I have used in the table. As per him, the naming convention was not accurate. Now changing the column name in the table is not a big issue. I figured out that I can do it very quickly either using a T-SQL script or SQL Server Management Studio. The real problem is that I have used this column in nearly 50+ stored procedures. This looks like a very mechanical task. I believe I can go and change it in all 50+ stored procedures, but is there a better solution I can use? Someone suggested that I should just go ahead and find the text in the system tables and update it there. Is that a safe solution? If not, what is your solution? In simple words, how do I replace a column name in multiple stored procedures efficiently and quickly? Please help me here, keeping my experience and non-production server in mind." Well, I found this question very interesting. Honestly I would have preferred if this question was asked on my social media handles (Facebook and Twitter) as I am very active there, and quite often other experts have already answered such questions before I even get there. Anyway I am now answering the same question on the blog so all of us can participate here and come up with an appropriate answer. Here is my answer – "My friend, I do not advise touching the system tables. Please do not go that route. It can be dangerous and is not appropriate. The issue which you face today is one I used to face early in my career, and I still face it often. There are two schools of thought I have observed – there are people who see no value in the name of the object and name objects like obj1, obj2 etc., and there are people who carefully choose the name of the object so that the object name is self-explanatory and almost tells a story. I am not here to take any side in this blog post – so let me go to a quick solution for your problem. Note: The following should not be practiced directly on a production server. It should be properly tested on a development server and, once validated, pushed to your production server with your existing deployment practice. The answer here assumes you have regular stored procedures and you are working on the development (non-production) server. Go to Server Node >> Databases >> DatabaseName >> Programmability >> Stored Procedures. Now make sure that Object Explorer Details is open (if not, open it by pressing F7). You will see the list of all the stored procedures there, on the right-hand side. 
Select either all of them or the ones which you believe are relevant to your query. Now… right-click on the stored procedures >> select DROP and CREATE To >> then select New Query Editor Window or Clipboard. Paste the complete script into a new window if you have selected the Clipboard option. Now press Control+H, which will bring up the Find and Replace screen. In this screen enter the column name to be replaced in the "Find What" box and the new column name in the "Replace With" box. Now execute the whole script. As we have selected DROP and CREATE To, it will drop the old procedure and create the new one. Another method follows the same procedure, but instead of DROP and CREATE you manually replace the CREATE word with the ALTER word. There is a small advantage in doing this: if for any reason an error comes up which prevents the new stored procedure from being created, you will still have your old stored procedure in the system as it is." Well, this was my answer to the question which I received. Do you see any other workaround or solution? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
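One more companion tip: before scripting anything out, you can ask the catalog views which stored procedures actually mention the column. Below is a minimal sketch of that check – OldColumnName is just a placeholder, so substitute the real column name being renamed. It only lists candidates; it does not change anything.

-- List stored procedures whose definition contains the old column name.
-- 'OldColumnName' is a placeholder for the column being renamed.
SELECT s.name AS SchemaName, p.name AS ProcedureName
FROM sys.procedures AS p
JOIN sys.schemas AS s ON s.schema_id = p.schema_id
JOIN sys.sql_modules AS m ON m.object_id = p.object_id
WHERE m.definition LIKE '%OldColumnName%'
ORDER BY s.name, p.name;

Keep in mind that a LIKE match against the module definition can return false positives (for example, a column with the same name in a different table), so the scripted DROP and CREATE output should still be reviewed before it is executed.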

    Read the article

  • MySQL Connector/Net 6.5.5 Maintenance Release has been released

    - by fernando
    MySQL Connector/Net 6.5.5, a new maintenance release of our 6.5 series, has been released.  This release is GA quality and is appropriate for use in production environments.  Please note that 6.6 is our latest driver series and is the recommended product for development. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point – if you can't find this version on some mirror, please try again later or choose another download site.) The 6.5.5 version of MySQL Connector/Net brings the following fixes: - Fix for ArgumentNull exception when using Take().Count() in a LINQ to Entities query (bug MySql #64749, Oracle bug #13913047). - Fix for type varchar changed to bit when saving in Table Designer (Oracle bug #13916560). - Fix for error when trying to change the name of an Index on the Indexes/Keys editor; along with this fix, users can now change the Index type of a new Index, which could not be done in previous versions, and when changing the Index name the change is reflected in the list view at the left side of the Index/Keys editor (Oracle bug #13613801). - Fix for stored procedure call using only its name with EF code first (MySql bug #64999, Oracle bug #14008699). - Fix for List.Contains generates a bunch of ORs instead of more efficient IN clause in LINQ to Entities (Oracle bug #14016344, MySql bug #64934). - Fix for performance issue in generated EF query: .NET StartsWith/Contains/EndsWith produces MySql's locate instead of Like (MySql bug #64935, Oracle bug #14009363). - Fix for script generated for code first contains wrong alter table and wrong declaration for byte[] (MySql bug #64216, Oracle bug #13900091). - Fix and code contribution for bug Timed out sessions are removed without notification, which allows enabling the Expired CallBack when the Session Provider times out any session (MySql bug #62266, Oracle bug #13354935). - Fix for Exception thrown when using cascade delete in an EDM Model-First in Entity Framework (Oracle bug #14008752, MySql bug #64779). - Fix for Session locking issue with MySqlSessionStateStore (MySql bug #63997, Oracle bug #13733054). - Fixed deleting a user profile using Profile provider (MySQL bug #64470, Oracle bug #13790123). - Fix for bug Cannot Create an Entity with a Key of Type String (MySQL bug #65289, Oracle bug #14540202). This fix checks if the type has a FixedLength facet set in order to create a char; otherwise it creates varchar, mediumtext or longtext types when using a String CLR type in Code First or Model First (also tested in Database First). Unit tests added for Code First and ProviderManifest. - Fix for bug "CacheServerProperties can cause 'Packet too large' error". The issue was due to a missing reading of the Max_allowed_packet server property when CacheServerProperties is set to true, since the value was read only in the first connection but the following pooled connections had a wrong value, causing a 'Packet too large' error. A unit test for this scenario is also included. All unit tests passed. MySQL Bug #66578, Orabug #14593547. - Fix for handling unnamed parameters in MySqlCommand. This fix allows MySqlCommand to handle parameters without requiring naming (e.g. INSERT INTO Test (id,name) VALUES (?, ?) ) (MySQL Bug #66060, Oracle bug #14499549). - Fixed inheritance on Entity Framework Code First scenarios. 
Discriminator column is created using its correct type as varchar(128) (MySql bug #63920 and Oracle bug #13582335). - Fixed "Trying to customize column precision in Code First does not work" (MySql bug #65001, Oracle bug #14469048). - Fixed bug ASP.NET Membership database fails on MySql database UTF32 (MySQL bug #65144, Oracle bug #14495292). - Fix for MySqlCommand.LastInsertedId holding only 32 bit values (MySql bug #65452, Oracle bug #14171960) by changing several internal declarations of LastInsertedId from int to long. - Fixed "Decimal type should have digits at right of decimal point"; the default is now 2, but the user's changes in the EDM designer are recognized (MySql bug #65127, Oracle bug #14474342). - Fix for NullReferenceException when saving an uninitialized row in Entity Framework (MySql bug #66066, Oracle bug #14479715). - Fix for error when calling RoleProvider.RemoveUserFromRole(): causes an exception due to a wrong table being used (MySql bug #65805, Oracle bug #14405338). - Fix for "Memory Leak on MySql.Data.MySqlClient.MySqlCommand", too many MemoryStream instances created (MySql bug #65696, Oracle bug #14468204). - Added ANTLR attribution notice (Oracle bug #14379162). - Fixed Entity Framework + mysql connector/net in partial trust throws exceptions (MySql bug #65036, Oracle bug #14668820). - Added support in Parser for Datetime and Time types with precision when using Server 5.6 (no bug number). - Small improvement to MySqlPoolManager CleanIdleConnections for a better idle cleanup timer at startup (MySql bug #66472 and Oracle bug #14652624). - Fix for bug TIMESTAMP values are mistakenly represented as DateTime with Kind = Local (Mysql bug #66964, Oracle bug #14740705). - Fix for bug Keyword not supported. Parameter name: AttachDbFilename (Mysql bug #66880, Oracle bug #14733472). - Added support to the MySql script file to retrieve data when using "SHOW" statements. - Fix for Package Load Failure in Visual Studio 2005 (MySql bug #63073, Oracle bug #13491674). - Fix for bug "Unable to connect using IPv6 connections" (MySQL bug #67253, Oracle bug #14835718). - Added auto-generated values for Guid identity columns (MySql bug #67450, Oracle bug #15834176). - Fix for method FirstOrDefault not supported in some LINQ to Entities queries (MySql bug #67377, Oracle bug #15856964). The release is available to download at http://dev.mysql.com/downloads/connector/net/6.5.html Documentation ------------------------------------- You can view current Connector/Net documentation at http://dev.mysql.com/doc/refman/5.5/en/connector-net.html You can find our team blog at http://blogs.oracle.com/MySQLOnWindows. You can also post questions on our forums at http://forums.mysql.com/. Enjoy and thanks for the support! 

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical. 1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted. 2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react. So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team with a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
RESTORE DATABASE WidgetProduction_virtual FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak' WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf', MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf', NORECOVERY, STATS=1, REPLACE GO RESTORE DATABASE WidgetProduction_virtual WITH RECOVERY   Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8GB production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional storage used for the new database following the virtual restore.   The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server
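One more note: because the heavy lifting stays in the .bak file, the same backup can be mounted more than once. As a rough sketch following the same pattern as above – the database names and file paths here are purely illustrative – giving two developers their own virtual copies might look like this:

-- Two additional virtual copies mounted from the same production backup;
-- database names and file paths are illustrative only.
RESTORE DATABASE WidgetProduction_Dev1
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev1.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev1.vldf',
NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Dev1 WITH RECOVERY
GO
RESTORE DATABASE WidgetProduction_Dev2
FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
WITH MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vmdf',
MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_Dev2.vldf',
NORECOVERY, STATS=1, REPLACE
GO
RESTORE DATABASE WidgetProduction_Dev2 WITH RECOVERY
GO

Each copy gets its own small .vmdf/.vldf pair, while the single .bak file continues to hold the bulk of the data.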

    Read the article

  • SQL SERVER – Number-Crunching with SQL Server – Exceed the Functionality of Excel

    - by Pinal Dave
    Imagine this. Your users have developed an Excel spreadsheet that extracts data from your SQL Server database, manipulates that data through the use of Excel formulas and, possibly, some VBA code, which is then used to calculate P&L, hedging requirements or even risk numbers. Management comes to you and tells you that they need to get rid of the spreadsheet and that the results of the spreadsheet calculations need to be persisted in the database. SQL Server has a very small set of functions for analyzing data. Excel has hundreds of functions for analyzing data, with many of them focused on specific financial and statistical calculations. Is it even remotely possible that you can use SQL Server to replace the complex calculations being done in a spreadsheet? Westclintech has developed a library of functions that match or exceed the functionality of Excel's functions and contain many functions that are not available in Excel. Their XLeratorDB library of functions contains over 700 functions that can be incorporated into T-SQL statements. XLeratorDB takes advantage of the SQL CLR architecture introduced in SQL Server 2005. SQL CLR permits managed code to be compiled into the database and run alongside built-in SQL Server functions like COUNT or SUM. The Westclintech developers have taken advantage of this architecture to bring robust analytical functions to the database. In our hypothetical spreadsheet, let's assume that our users are using the YIELD function and that the data are extracted from a table in our database called BONDS. Here's what the spreadsheet might look like. We go to column G and see that it contains the following formula. Obviously, SQL Server does not offer a native YIELD function. However, with XLeratorDB we can replicate this calculation in SQL Server with the following statement: SELECT *, wct.YIELD(CAST(GETDATE() AS date),Maturity,Rate,Price,100,Frequency,Basis) AS YIELD FROM BONDS This produces the following result. This illustrates one of the best features about XLeratorDB; it is so easy to use. Since I knew that the spreadsheet was using the YIELD function I could use the same function with the same calling structure to do the calculation in SQL Server. I didn't need to know anything at all about the mechanics of calculating the yield on a bond. It was pretty close to cut and paste. In fact, that's one way to construct the SQL. Just copy the function call from the cell in the spreadsheet, paste it into SSMS, and change the cell references to column names. I built the SQL for this query by starting with this. SELECT * ,YIELD(TODAY(),B2,C2,D2,100,E2,F2) FROM BONDS I then changed the cell references to column names. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) ,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Finally, I replicated the TODAY() function using GETDATE() and added the schema name to the function name. SELECT * --,YIELD(TODAY(),B2,C2,D2,100,E2,F2) --,YIELD(TODAY(),Maturity,Rate,Price,100,Frequency,Basis) ,wct.YIELD(GETDATE(),Maturity,Rate,Price,100,Frequency,Basis) FROM BONDS Then I am able to execute the statement, returning the results seen above. The XLeratorDB libraries are heavy on financial, statistical, and mathematical functions. Where there is an analog to an Excel function, the XLeratorDB function uses the same naming conventions and calling structure as the Excel function, but there are also hundreds of additional functions for SQL Server that are not found in Excel. 
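If you want to try the statement above on a scratch database, here is a minimal sketch of what the BONDS table might look like. The column types and the sample row are assumptions made purely for illustration – they are not the actual schema behind the spreadsheet – while the wct.YIELD call is the same one shown above.

-- Hypothetical BONDS table for experimenting with wct.YIELD;
-- the column types and sample values are assumptions, not the real schema.
CREATE TABLE BONDS (
    Maturity  date,
    Rate      float,
    Price     float,
    Frequency int,
    Basis     int
);

INSERT INTO BONDS (Maturity, Rate, Price, Frequency, Basis)
VALUES ('2030-06-30', 0.045, 98.75, 2, 0);

SELECT *,
       wct.YIELD(CAST(GETDATE() AS date), Maturity, Rate, Price, 100, Frequency, Basis) AS YIELD
FROM BONDS;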
You can find the functions by opening Object Explorer in SQL Server Management Studio (SSMS) and expanding the Programmability folder under the database where the functions have been installed. The Functions folder expands to show four sub-folders: Table-valued Functions, Scalar-valued Functions, Aggregate Functions, and System Functions. You can expand any of the first three folders to see the XLeratorDB functions. Since the wct.YIELD function is a scalar function, we will open the Scalar-valued Functions folder, scroll down to the wct.YIELD function and click the plus sign (+) to display the input parameters. The functions are also IntelliSense-enabled, with the input parameters displayed directly in the query tab. The Westclintech website contains documentation for all the functions, including examples that can be copied directly into a query window and executed. There are also more than one hundred articles on the site which go into more detail about how some of the functions work and demonstrate some of the extensive business processes that can be done in SQL Server using XLeratorDB functions and some T-SQL. XLeratorDB is organized into libraries: finance, statistics, math, strings, engineering, and financial options. There is also a windowing library for SQL Server 2005, 2008, and 2012 which provides functions for calculating things like running and moving averages (which were introduced in SQL Server 2012), FIFO inventory calculations, financial ratios and more, without having to use triangular joins. To get started you can download the XLeratorDB 15-day free trial from the Westclintech web site. It is a fully-functioning, unrestricted version of the software. If you need more than 15 days to evaluate the software, you can simply download another 15-day free trial. XLeratorDB is an easy and cost-effective way to start adding sophisticated data analysis to your SQL Server database without having to know anything more than T-SQL. Get XLeratorDB Today and Now! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Excel

    Read the article

  • NDepend 4 – First Steps

    - by Ricardo Peres
    Introduction Thanks to Patrick Smacchia I had the chance to test NDepend 4. I can only say: awesome! This will be the first of a series of posts on NDepend, where I will talk about my discoveries. Keep in mind that I am just starting to use it, so more experienced users may find these too basic; I just hope I don't say anything foolish! I must say that I am in no way affiliated with NDepend and I never actually met Patrick. Installation No installation program – a curious decision, though I'm not against it – just unzip the files to a folder and run the executable. It will optionally register itself with Visual Studio 2008, 2010 and 11 as well as RedGate's Reflector; also, it automatically looks for updates. NDepend can either be used as a stand-alone program (with or without a GUI) or from within Visual Studio or Reflector. Getting Started One thing that really pleases me is the Getting Started section of the stand-alone version, with links to pages on NDepend's web site featuring detailed explanations, which usually include screenshots and small videos (<5 minutes). There's also a How do I section with hierarchical navigation that guides us through the major features so that we can easily find what we want. Usage There are two basic ways to use NDepend: analyze .NET solutions, projects or assemblies; or compare two versions of the same assembly. I have so far not used NDepend to compare assemblies, so I will first talk about the first option. After selecting a solution and some of its projects, it generates a single HTML page with a highly detailed report of the analysis it produced. This includes some metrics such as the number of lines of code, IL instructions, comments, types, methods and properties, the calculation of cyclomatic complexity, coupling and lots of other indicators, typically grouped by type, namespace and assembly. The HTML also includes some nice diagrams depicting assembly dependencies, type and method relative proportions (according to the number of IL instructions, I guess) and assembly analysis relating to abstractness and stability. Useful, I would say. Then there's the rules; NDepend tests the target assemblies against a set of more than 120 rules, grouped in the categories Code Quality, Object Oriented Design, Design, Architecture and Layering, Dead Code, Visibility, Naming Conventions, Source Files Organization and .NET Framework Usage. The full list can be configured in the application, and an explanation of each rule can be found on the web site. Rules can be validated, violated and violated in a critical manner, and the HTML will contain the violated rules, their queries – more on this later – and results. The HTML uses some nice JavaScript effects, which allow paging and sorting of tables, so it's nice to use. Similar to the rules, there are some queries that display results for a number of questions (about 200) grouped as Object Oriented Design, API Breaking Changes (for assembly version comparison), Code Diff Summary (also for version comparison) and Dead Code. The difference between queries and rules is that queries are not classified as passed, violated or critically violated; they just present results. The queries and rules are expressed through CQLinq, which is a very powerful LINQ derivative specific to code analysis. All of the included rules and queries can be enabled or disabled and new ones can be added, with IntelliSense to help. 
Besides the HTML report file, the NDepend application can be used to explore all analysis results, compare different versions of analysis reports and run custom queries. Comparison to Other Analysis Tools Unlike StyleCop, NDepend only works with assemblies, not source code, so you can't expect it to be able to enforce bracket placement, for example. It is more similar to FxCop, but you don't have the option to analyze at the IL level, that is, other than the number of IL instructions and the complexity. What's Next In the coming days I'll continue my exploration with a real-life test case. References The NDepend web site is http://www.ndepend.com/. Patrick keeps an updated blog at http://codebetter.com/patricksmacchia/ and he regularly monitors StackOverflow for questions tagged NDepend, which you can find at http://stackoverflow.com/questions/tagged/ndepend. The default list of CQLinq rules, queries and statistics can be found at http://www.ndepend.com/DefaultRules/webframe.html. The syntax itself is described at http://www.ndepend.com/Doc_CQLinq_Syntax.aspx and its features at http://www.ndepend.com/Doc_CQLinq_Features.aspx.

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >