Search Results

Search found 315 results on 13 pages for 'louis russell'.

Page 13/13 | < Previous Page | 9 10 11 12 13 

  • Yes, I did it - Skydiving in Mauritius

    Finally, I did it, or better said, we did it. Back in November last year I saw the big billboard advertisement of Skydive Austral Mauritius near Caudan Waterfront in Port Louis and decided for myself that this was going to be the perfect birthday gift for my wife. Simply out of curiosity, I would join her tandem jump with a second instructor. Due to her pregnancy with our son I had to be patient... But then finally her birthday arrived, and at our midnight celebration session I showed her her netbook with the website preloaded. Actually, it was the "perfect" timing... Recovery from her cesarean was fine, local weather conditions were gorgeous and the children were under the surveillance of my mum, who was spending her annual holidays on the island. So, after a late wake-up in the morning, we packed our stuff and off we went. According to Google Maps we had to drive roughly 50 km (only), but traffic here in Mauritius is always challenging. The dropzone is at the Zone Industrielle Mon Loisir Sugar Estate near Riviere du Rempart on the north-east coast. Anyway, we were not in a hurry and arrived there shortly after noon. The access road to the airfield is just a small, worn-down path through sugar cane fields and, according to our daughter, "it's bumpy!". True, true, true... The facilities at Skydive Austral Mauritius are complete, except for food. Enough space for parking, easy handling at the reception and a lot to see for the kids. There's even a big terrace with several sets of tables and chairs and a small bar for soft drinks, strictly non-alcoholic. The team over there is all welcoming and warm-hearted! Having the kids with us was no issue at all. Quite the opposite: our daughter was allowed to discover more things than we adults did. Even visiting the small aeroplane was on the menu for her. Really great stuff! While waiting for our turn we enjoyed watching other people getting ready in the jump gear, taking off with the Cessna, and finally coming back down on the tandem parachute. Actually, the different expressions on their faces were one of the best parts of the waiting. Great mental preparation, as my wife was getting more anxious about her first jump... First, we got some information about the procedures on the plane: how to get seated, how to be strapped to our instructors, and how to get ready for the jump off the plane as soon as we reached the height of 10,000 ft. All well explained and easy to understand after all. Next, we met our jumpers Chris and Lee aka "Rasta" to get dressed and ready for take-off. Those guys are really cool and relaxed at their job. From that point on, the DVD recording session for my wife's birthday started and we really had a lot of fun... The difference between that small Cessna and a commercial flight with an Airbus or a Boeing is astronomical! The climb up to 10,000 ft took us roughly 25 minutes and we enjoyed the magnificent view over the turquoise lagoons near Poste de Flacq, Lafayette and Isle d'Ambre on the north-east coast. After flying through the clouds we sunbathed and looked down over an "icing-sugar covered" Mauritius. You might have a look at the picture gallery of Skydive Mauritius for a better impression. The moment of truth, or better said, the point of no return came after approximately 25 minutes. The door opens, you move into position on the side on top of the wheel and... out! Back flip and free fall! Slight turns and wooooohooooo! through the clouds... It's so amazing and breathtaking!
    So indescribable! You have to experience this yourself! Some seconds later the parachute opened and we glided smoothly, with some turns and spins, back down to the dropzone. The rest of the family could soon hear and see us, and the landing was easy going. We never had any doubts or fears about our instructors. They did a great job, and we are looking forward to booking our next jump. I might even consider taking skydiving classes and earning a license. By the way, feel free to get in touch with Skydive Austral Mauritius, either via the contact details on their website or by tweeting a little bit with them. Follow the tweets of Chris and fellows on SkydiveAustral.

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a higher-level view in his blog post April 2012 Labs Release – Online DDL Improvements. MySQL before the InnoDB Plugin Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example: CREATE TABLE t(a INT); INSERT INTO t VALUES (1),(2),(3); CREATE INDEX a ON t(a); DROP TABLE t; The CREATE INDEX statement would be executed roughly as follows: CREATE TABLE temp(a INT, INDEX(a)); INSERT INTO temp SELECT * FROM t; RENAME TABLE t TO temp2; RENAME TABLE temp TO t; DROP TABLE temp2; You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time. Fast Index Creation in the InnoDB Plugin of MySQL 5.1 MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the inserts into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine. The ALTER TABLE interface in MySQL 5.6 The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods and data structures that keep track of the requested changes. In MySQL 5.6, online ALTER TABLE operations can be requested by specifying LOCK=NONE. LOCK=SHARED and LOCK=EXCLUSIVE are also available. The old-style table copying can be requested by ALGORITHM=COPY; that one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods: handler::check_if_supported_inplace_alter checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed. handler::prepare_inplace_alter_table is used by InnoDB to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
    handler::inplace_alter_table is used in InnoDB for creating secondary indexes or for rebuilding the table. This is the 'main' phase that can be executed online (with concurrent writes to the table). handler::commit_inplace_alter_table is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild. The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine. Online Secondary Index Creation It may occur that an index needs to be created on a new column to speed up queries. But it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases: Set up a stub for the index, for logging changes. Scan the table for index records. Sort the index records. Bulk load the index records. Apply the logged changes. Replace the stub with the actual index. Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion. Native ALTER TABLE Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table with an ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. To facilitate online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write their changes to the log.
When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
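    To make the syntax concrete, here is a minimal sketch of requesting the different modes described above, reusing the article's example table t(a INT); the index name idx_a is an illustrative choice, not taken from the original post:

        -- Online secondary index creation: concurrent reads and writes remain allowed.
        ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

        -- Quick drop of the index; no table copy is involved.
        ALTER TABLE t DROP INDEX idx_a, ALGORITHM=INPLACE, LOCK=NONE;

        -- Force the old table-copying behaviour; this requires at least a shared lock.
        ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=COPY, LOCK=SHARED;

    If a requested combination is not supported, for example LOCK=NONE on an operation InnoDB can only perform under a table lock, the statement fails immediately rather than silently falling back to a more restrictive mode.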

    Read the article

  • Hide subdomain AND subdirectory using mod_rewrite?

    - by Jeremy
    I am trying to hide a subdomain and subdirectory from users. I know it may be easier to use a virtual host but will that not change direct links pointing at our site? The site currently resides at http://mail.ctrc.sk.ca/cms/ I want www.ctrc.sk.ca and ctrc.sk.ca to access this folder but still display www.ctrc.sk.ca. If that makes any sense. Here is what our current .htaccess file looks like, we are using Joomla so there already a few rules set up. Help is appreciated. # Helicon ISAPI_Rewrite configuration file # Version 3.1.0.78 ## # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $ # @package Joomla # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved. # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL # Joomla! is Free Software ## ##################################################### # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. # ##################################################### ## Can be commented out if causes errors, see notes above. #Options +FollowSymLinks # # mod_rewrite in use RewriteEngine On ########## Begin - Rewrite rules to block out some common exploits ## If you experience problems on your site block out the operations listed below ## This attempts to block the most common type of exploit `attempts` to Joomla! # ## Deny access to extension xml files (uncomment out to activate) #<Files ~ "\.xml$"> #Order allow,deny #Deny from all #Satisfy all #</Files> ## End of deny access to extension xml files RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR] # Block out any script trying to base64_encode crap to send via URL RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR] # Block out any script that includes a <script> tag in URL RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Send all blocked request to homepage with 403 Forbidden error! RewriteRule ^(.*)$ index.php [F,L] # ########## End - Rewrite rules to block out some common exploits # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root) #RewriteBase / ########## Begin - Joomla! core SEF Section # RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !^/index.php RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC] RewriteRule (.*) index.php RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L] # ########## End - Joomla! core SEF Section EDIT Yes, mail.ctrc.sk.ca/cms/ is the root directory. Currently the DNS redirects from ctrc.sk.ca and www.ctrc.sk.ca to mail.ctrc.sk.ca/cms. However when it redirects the user still sees the mail.ctrc.sk.ca/cms/ url and I want them to only see www.ctrc.sk.ca.
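    The question is still open, but here is a minimal untested sketch of one common approach, assuming the DNS for ctrc.sk.ca and www.ctrc.sk.ca can be pointed directly at the same server (an A record rather than a redirect). An internal rewrite then serves the /cms/ content while the address bar keeps showing www.ctrc.sk.ca; the host and path names are taken from the question, everything else should be verified:

        # Hypothetical .htaccess in the site root (not the one inside /cms/), untested:
        RewriteEngine On
        # For requests arriving on ctrc.sk.ca or www.ctrc.sk.ca...
        RewriteCond %{HTTP_HOST} ^(www\.)?ctrc\.sk\.ca$ [NC]
        # ...that are not already asking for /cms/...
        RewriteCond %{REQUEST_URI} !^/cms/
        # ...rewrite internally (no redirect), so the visible URL does not change.
        RewriteRule ^(.*)$ /cms/$1 [L]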

    Read the article

  • unmet dependencies in Ubuntu 12.04

    - by lee.O
    I tried today to install a dvb-card on my Ubuntu 12.04 (Linux blauhai-linux 3.2.0-25-generic #40-Ubuntu SMP Wed May 23 20:30:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux ). The installation failed with an error. After that, i tried to install python (it was already installed but i got this error): linux:~$ sudo apt-get install git Reading package lists... Done Building dependency tree Reading state information... Done git is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: python-glade2:i386 : Depends: python:i386 (< 2.5) but it is not going to be installed Depends: python-support:i386 (= 0.3.4) but it is not installable Depends: python:i386 (= 2.4) but it is not going to be installed Depends: libglade2-0:i386 (= 1:2.5.1) but it is not going to be installed Depends: python-gtk2:i386 (= 2.8.6-8) but it is not going to be installed python-numeric:i386 : Depends: python:i386 (< 2.5) but it is not going to be installed Depends: python:i386 (= 2.3) but it is not going to be installed Depends: python-central:i386 (= 0.5.7) but it is not installable E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). well, i can read and tried the proposed command, but then i get this: linux:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: libopenal1:i386 libsdl-ttf2.0-0:i386 libkrb5-3:i386 libgconf-2-4:i386 libsm-dev libatk1.0-0:i386 libk5crypto3:i386 libstdc++5:i386 libqt4-declarative:i386 libxcomposite1:i386 libice-dev libgail18:i386 libldap-2.4-2:i386 libao-common libv4l-0:i386 liblcms1:i386 libqt4-qt3support:i386 libroken18-heimdal:i386 libunistring0:i386 libcupsimage2:i386 libgphoto2-port0:i386 libidn11:i386 libnss3:i386 libcaca0:i386 gtk2-engines:i386 libgudev-1.0-0:i386 libjpeg-turbo8:i386 libpthread-stubs0 libcairo-gobject2:i386 libavc1394-0:i386 libjpeg8:i386 libotr2 libaio1:i386 libsane:i386 odbcinst1debian2 odbcinst1debian2:i386 libqt4-test:i386 libqt4-script:i386 libqt4-designer:i386 libsdl-mixer1.2:i386 libqt4-network:i386 libqt4-dbus:i386 libcap2:i386 libproxy1:i386 ibus-gtk:i386 libdbus-glib-1-2:i386 libtdb1:i386 libasn1-8-heimdal:i386 libspeex1:i386 libxslt1.1:i386 libgomp1:i386 libcapi20-3:i386 libibus-1.0-0:i386 libcairo2:i386 libgnutls26:i386 libopenal-data odbcinst libgssapi3-heimdal:i386 libcanberra0:i386 libtasn1-3:i386 libfreetype6:i386 x11proto-kb-dev gtk2-engines-murrine:i386 libwavpack1:i386 libqt4-opengl:i386 libsoup-gnome2.4-1:i386 libv4lconvert0:i386 gstreamer0.10-plugins-good:i386 libc6-i386 lib32gcc1 libqt4-xmlpatterns:i386 librsvg2-common:i386 libdatrie1:i386 xtrans-dev libavahi-common-data:i386 libiec61883-0:i386 lib32asound2 libgdk-pixbuf2.0-0:i386 libsdl-image1.2:i386 libp11-kit0:i386 x11proto-input-dev libwind0-heimdal:i386 libpixman-1-0:i386 libsdl1.2debian:i386 libxaw7:i386 libgdbm3:i386 libcups2:i386 libcurl3:i386 libqtcore4:i386 libxinerama1:i386 libesd0:i386 libmikmod2:i386 libkrb5support0:i386 libxft2:i386 libxt-dev libcroco3:i386 libpulse-mainloop-glib0:i386 libice6:i386 libaa1:i386 libieee1284-3:i386 libgcrypt11:i386 libthai0:i386 libao4:i386 libkeyutils1:i386 libxmu6:i386 libcanberra-gtk0:i386 libvorbisfile3:i386 libqt4-sql:i386 esound-common libxpm4:i386 libqt4-svg:i386 libusb-0.1-4:i386 libgail-common:i386 libxrender1:i386 libhcrypto4-heimdal:i386 
libraw1394-11:i386 libnspr4:i386 libshout3:i386 libdv4:i386 libhx509-5-heimdal:i386 libxau-dev libqt4-xml:i386 gstreamer0.10-x:i386 libgettextpo0:i386 libxss1:i386 libgd2-xpm:i386 libheimbase1-heimdal:i386 libtiff4:i386 libsdl-net1.2:i386 libjasper1:i386 libgnome-keyring0:i386 libxtst6:i386 gtk2-engines-pixbuf:i386 libqtgui4:i386 libtag1c2a:i386 librsvg2-2:i386 libavahi-client3:i386 libssl0.9.8:i386 libmpg123-0:i386 libmad0:i386 libsasl2-2:i386 xorg-sgml-doctools libgsoap1 gtk2-engines-oxygen:i386 libfontconfig1:i386 xaw3dg:i386 libpango1.0-0:i386 libsm6:i386 libx11-dev libheimntlm0-heimdal:i386 libpulsedsp:i386 lib32stdc++6 libx11-doc libqt4-sql-mysql:i386 libxcb-render0:i386 libodbc1:i386 libexif12:i386 libqt4-scripttools:i386 librtmp0:i386 libgssapi-krb5-2:i386 libxi6:i386 libqtwebkit4:i386 libxcb1-dev libxp6:i386 libaudio2:i386 libxcursor1:i386 libxcb-shm0:i386 libxt6:i386 libxv1:i386 libsasl2-modules:i386 libavahi-common3:i386 libxrandr2:i386 x11proto-core-dev libsqlite3-0:i386 libmng1:i386 libgtk2.0-0:i386 libxdmcp-dev libpthread-stubs0-dev libltdl7:i386 libkrb5-26-heimdal:i386 libssl1.0.0:i386 glib-networking:i386 libgpg-error0:i386 libsoup2.4-1:i386 libgphoto2-2:i386 libtag1-vanilla:i386 libaudiofile1:i386 libglade2-0:i386 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal Suggested packages: icedtea-plugin sun-java6-fonts fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts python3-doc python3-tk python3.2-doc binfmt-support The following packages will be REMOVED: activity-log-manager-control-center aisleriot alacarte apparmor apport apport-gtk apt-xapian-index aptdaemon apturl apturl-common bluez bluez-alsa bluez-alsa:i386 bluez-gstreamer checkbox checkbox-qt command-not-found compiz compiz-gnome compiz-plugins-main-default compizconfig-backend-gconf deja-dup duplicity eog evolution-data-server firefox firefox-globalmenu firefox-gnome-support foomatic-db-compressed-ppds gconf-editor gconf2 gdb gedit gir1.2-mutter-3.0 gir1.2-peas-1.0 gir1.2-rb-3.0 gir1.2-totem-1.0 gir1.2-ubuntuoneui-3.0 gksu gnome-applets gnome-applets-data gnome-bluetooth gnome-contacts gnome-control-center gnome-media gnome-menus gnome-orca gnome-panel gnome-panel-data gnome-session-fallback gnome-shell gnome-sudoku gnome-terminal gnome-terminal-data gnome-themes-standard gnome-tweak-tool gnome-user-share gstreamer0.10-gconf gwibber gwibber-service gwibber-service-facebook gwibber-service-identica gwibber-service-twitter hplip hplip-data ia32-libs ia32-libs-multiarch:i386 ibus ibus-pinyin ibus-table indicator-datetime indicator-power jockey-common jockey-gtk landscape-client-ui-install language-selector-common language-selector-gnome launchpad-integration libcanberra-gtk-module libcanberra-gtk-module:i386 libcanberra-gtk3-module libcompizconfig0 libfolks-eds25 libgksu2-0 libgnome-media-profiles-3.0-0 libgnome2-0 libgnome2-common libgnomevfs2-0 libgnomevfs2-common libgweather-3-0 libgweather-common libgwibber-gtk2 libgwibber2 libmetacity-private0 libmutter0 libpeas-1.0-0 libpurple-bin libpython2.7 libreoffice-gnome librhythmbox-core5 libsyncdaemon-1.0-1 libtotem0 libubuntuoneui-3.0-1 light-themes lsb-release metacity metacity-common mutter-common nautilus-dropbox 
nautilus-share network-manager-gnome nvidia-common nvidia-settings nvidia-settings-updates onboard oneconf openjdk-7-jdk openjdk-7-jre openprinting-ppds pidgin pidgin-libnotify pidgin-otr printer-driver-foo2zjs printer-driver-ptouch printer-driver-pxljr printer-driver-sag-gdi printer-driver-splix python python-appindicator python-apport python-apt python-apt-common python-aptdaemon python-aptdaemon.gtk3widgets python-aptdaemon.pkcompat python-brlapi python-cairo python-central python-chardet python-configglue python-crypto python-cups python-cupshelpers python-dateutil python-dbus python-debian python-debtagshw python-defer python-dirspec python-egenix-mxdatetime python-egenix-mxtools python-gconf python-gdbm python-gi python-gi-cairo python-glade2:i386 python-gmenu python-gnomekeyring python-gnupginterface python-gobject python-gobject-2 python-gpgme python-gst0.10 python-gtk2 python-httplib2 python-ibus python-imaging python-keyring python-launchpadlib python-lazr.restfulclient python-lazr.uri python-libproxy python-libxml2 python-louis python-mako python-markupsafe python-minimal python-notify python-numeric:i386 python-oauth python-openssl python-packagekit python-pam python-pexpect python-piston-mini-client python-pkg-resources python-problem-report python-protobuf python-pyatspi2 python-pycurl python-pyinotify python-renderpm python-reportlab python-reportlab-accel python-serial python-simplejson python-smbc python-software-properties python-speechd python-twisted-bin python-twisted-core python-twisted-names python-twisted-web python-ubuntu-sso-client python-ubuntuone-client python-ubuntuone-control-panel python-ubuntuone-storageprotocol python-uno python-virtkey python-wadllib python-xapian python-xdg python-xkit python-zeitgeist python-zope.interface python2.7 python2.7-minimal rhythmbox rhythmbox-mozilla rhythmbox-plugin-cdrecorder rhythmbox-plugin-magnatune rhythmbox-plugin-zeitgeist rhythmbox-plugins rhythmbox-ubuntuone screen-resolution-extra sessioninstaller skype software-center software-center-aptdaemon-plugins software-properties-common software-properties-gtk system-config-printer-common system-config-printer-gnome system-config-printer-udev texlive-extra-utils totem totem-mozilla totem-plugins ubuntu-artwork ubuntu-desktop ubuntu-minimal ubuntu-sso-client ubuntu-sso-client-gtk ubuntu-standard ubuntu-system-service ubuntuone-client ubuntuone-client-gnome ubuntuone-control-panel ubuntuone-couch ubuntuone-installer ufw unattended-upgrades unity unity-2d unity-common unity-lens-applications unity-lens-video unity-scope-musicstores unity-scope-video-remote update-manager update-manager-core update-notifier update-notifier-common usb-creator-common usb-creator-gtk virtualbox virtualbox-dkms virtualbox-qt xdiagnose xul-ext-ubufox zeitgeist zeitgeist-core zeitgeist-datahub The following NEW packages will be installed: default-jre default-jre-headless icedtea-6-jre-cacao icedtea-6-jre-jamvm icedtea-netx icedtea-netx-common libglade2-0:i386 libpython3.2 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib python3 python3-minimal python3-uno python3.2 python3.2-minimal WARNING: The following essential packages will be removed. This should NOT be done unless you know exactly what you are doing! python-minimal python2.7-minimal (due to python-minimal) 0 upgraded, 16 newly installed, 273 to remove and 0 not upgraded. 2 not fully installed or removed. Need to get 39.1 MB of archives. After this operation, 324 MB disk space will be freed. 
    You are about to do something potentially harmful. To continue type in the phrase 'Yes, do as I say!' ?] That's not good, is it?! Should I run this command, or should I run another command to fix this problem? It would be great if somebody could help me. :) Thanks in advance. Best regards
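    Do not type the confirmation phrase: that run would remove 273 packages, including ubuntu-desktop and the essential python-minimal. A less destructive first step, offered as a suggestion to verify rather than a guaranteed fix, is to remove just the two obsolete i386 packages whose dependencies can no longer be satisfied (their names are taken from the error output above) and then let apt re-check:

        # Remove only the two broken i386 packages named in the dependency errors,
        # instead of letting 'apt-get -f install' tear out the whole desktop:
        sudo apt-get remove python-glade2:i386 python-numeric:i386

        # Afterwards, confirm that the package database is consistent again:
        sudo apt-get check
        sudo apt-get -f install   # should now report nothing left to fix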

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... My first approach wasn't that promising and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen... Setting up an appointment and prerequisites The major breakthrough came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre, you are requested to provide the full name and National ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, packed with this information, I posted the call through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to get to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course, it was! Arriving at the Emtel Data Centre As I was coming from Flic En Flac towards the north, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean might eventually be living in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet Ish and Veer. Again, Emtel takes access procedures to their data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was a straightforward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction to the rules and regulations during our visit: no photography allowed, no touching the buttons, and following his instructions throughout the whole visit. Of course! Inside the data centre Next, he explained to us the multi-factor authentication system, which uses a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to learn about the considerations that went into choosing the right location and setting up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings. Strengths of the Emtel TIER 3 Data Centre, Mauritius Finally, we were guided into the first server room.
    And wow, the whole setup is cleverly planned and outlined in the architecture: from the false floors and ceilings that provide optimum air flow, to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions for analysing and watching changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the National Fibre Optic Gigabit Ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE and LION/LION2 out of Mauritius, and through several other providers for onward connectivity. Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens monitoring any kind of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office. Clear advantages of hosting your e-commerce and mobile backends locally After the completely isolated NOC area, we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. At first sight, it should be well noted that there is lots of space for full-sized racks and therefore for co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency, which would be a good improvement, especially for upcoming e-commerce solutions for two of my local clients. Later on, we actually started the conversation about additional services that could be a catalyst for the local market, in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today, Emtel does not provide virtualised server environments, but there might be plans to cover this field in the future. Emtel is a mobile operator and internet connectivity provider in the first place; entering a market of managed and virtualised server infrastructures, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out...
    And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of it: Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs. Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius. More details are thoroughly described in Emtel's brochure about their data centre; check out their PDF document here. Thanks for this opportunity Visiting and walking through the Emtel data centre for more than two hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC), I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree, for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • And What's Your Brand Worth? ...anything?

    - by [email protected]
    100 Best Global Brands from BusinessWeek. Story: The Great Trust Offensive | Slide Show: Top Brands 2009 | Methodology: Picking the Winners. The recession has presented marketing executives around the world with the toughest test of their careers. Some brands have prospered amid the hard times, or at least held their own. Others have slipped a surprising number of places on our ninth annual ranking, compiled by consultancy Interbrand. But for seven brands, impressive performances saw them race up the charts to take their place on this year's list. Here are the numbers behind the rankings:
    Rank 2009 | Rank 2008 | Brand | 2009 Brand Value ($M) | 2008 Brand Value ($M) | Change (%) | Country of Ownership
    1 | 1 | Coca-Cola | 68,734 | 66,667 | 3 | U.S.
    2 | 2 | IBM | 60,211 | 59,031 | 2 | U.S.
    3 | 3 | Microsoft | 56,647 | 59,007 | -4 | U.S.
    4 | 4 | GE | 47,777 | 53,086 | -10 | U.S.
    5 | 5 | Nokia | 34,864 | 35,942 | -3 | Finland
    6 | 8 | McDonald's | 32,275 | 31,049 | 4 | U.S.
    7 | 10 | Google | 31,980 | 25,590 | 25 | U.S.
    8 | 6 | Toyota | 31,330 | 34,050 | -8 | Japan
    9 | 7 | Intel | 30,636 | 31,261 | -2 | U.S.
    10 | 9 | Disney | 28,447 | 29,251 | -3 | U.S.
    11 | 12 | Hewlett-Packard | 24,096 | 23,509 | 2 | U.S.
    12 | 11 | Mercedes-Benz | 23,867 | 25,577 | -7 | Germany
    13 | 14 | Gillette | 22,841 | 22,069 | 4 | U.S.
    14 | 17 | Cisco | 22,030 | 21,306 | 3 | U.S.
    15 | 13 | BMW | 21,671 | 23,298 | -7 | Germany
    16 | 16 | Louis Vuitton | 21,120 | 21,602 | -2 | France
    17 | 18 | Marlboro | 19,010 | 21,300 | -11 | U.S.
    18 | 20 | Honda | 17,803 | 19,079 | -7 | Japan
    19 | 21 | Samsung | 17,518 | 17,689 | -1 | S. Korea
    20 | 24 | Apple | 15,443 | 13,724 | 12 | U.S.
    21 | 22 | H&M | 15,375 | 13,840 | 11 | Sweden
    22 | 15 | American Express | 14,971 | 21,940 | -32 | U.S.
    23 | 26 | Pepsi | 13,706 | 13,249 | 3 | U.S.
    24 | 23 | Oracle | 13,699 | 13,831 | -1 | U.S.
    25 | 28 | Nescafe | 13,317 | 13,055 | 2 | Switzerland
    26 | 29 | Nike | 13,179 | 12,672 | 4 | U.S.
    27 | 31 | SAP | 12,106 | 12,228 | -1 | Germany
    28 | 35 | Ikea | 12,004 | 10,913 | 10 | Sweden
    29 | 25 | Sony | 11,953 | 13,583 | -12 | Japan
    30 | 33 | Budweiser | 11,833 | 11,438 | 3 | Belgium
    31 | 30 | UPS | 11,594 | 12,621 | -8 | U.S.
    32 | 27 | HSBC | 10,510 | 13,143 | -20 | Britain
    33 | 36 | Canon | 10,441 | 10,876 | -4 | Japan
    34 | 39 | Kellogg's | 10,428 | 9,710 | 7 | U.S.
    35 | 32 | Dell | 10,291 | 11,695 | -12 | U.S.
    36 | 19 | Citi | 10,254 | 20,174 | -49 | U.S.
    37 | 37 | JPMorgan | 9,550 | 10,773 | -11 | U.S.
    38 | 38 | Goldman Sachs | 9,248 | 10,331 | -10 | U.S.
    39 | 40 | Nintendo | 9,210 | 8,772 | 5 | Japan
    40 | 44 | Thomson Reuters | 8,434 | 8,313 | 1 | Canada
    41 | 45 | Gucci | 8,182 | 8,254 | -1 | Italy
    42 | 43 | Philips | 8,121 | 8,325 | -2 | Netherlands
    43 | 58 | Amazon | 7,858 | 6,434 | 22 | U.S.
    44 | 51 | L'Oreal | 7,748 | 7,508 | 3 | France
    45 | 47 | Accenture | 7,710 | 7,948 | -3 | U.S.
    46 | 46 | eBay | 7,350 | 7,991 | -8 | U.S.
    47 | 48 | Siemens | 7,308 | 7,943 | -8 | Germany
    48 | 56 | Heinz | 7,244 | 6,646 | 9 | U.S.
    49 | 49 | Ford | 7,005 | 7,896 | -11 | U.S.
    50 | 62 | Zara | 6,789 | 5,955 | 14 | Spain
    Valuations do not represent a guarantee of future performance of the brands or companies. Data: Interbrand, BusinessWeek

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia.  At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions.  Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL. There were roughly 250 attendees at the event, about half of which were vendors and consultants and the rest financial reporting staff from corporate filers.  Event sponsors included Ernst & Young, SWIFT and Fujitsu.  There were also a number of XBRL technology and service providers exhibiting at the conference.  On Monday Nov. 8th, the XBRL US Steering Committee meetings and the Annual Members meeting and reception were held.  At the Annual Members meeting the big news was that the current XBRL US President, Mark Bolgiano, is moving to a new position at the Howard Hughes Medical Institute.  Campbell Pryde, who had led taxonomy development for XBRL US, is taking over as XBRL US President. Other items highlighted at the members meeting included: the US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance; 16 filer training events were held in 2010; XBRL Global Magazine was launched; a Corporate Actions proposal was submitted to the SEC with SWIFT in May; XBRL Labs for iPhone and the XBRL US Consistency Suite were launched; ISO 20022 Corporate Actions alignment with XBRL was achieved; and the XBRL Credit Rating taxonomy was accepted. Tuesday Nov. 9th included keynotes, general sessions, an Innovation Workshop for Governments and Securities Professionals, and an opening reception.  General sessions included: Lessons Learned from the SEC's rollout of XBRL.  More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010.  Most of these related to negative values being used where they shouldn't have been.  Also, the SEC feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements.  They emphasize using existing elements in the US GAAP taxonomy and advise filers not to create extensions to improve the visual formatting of XBRL filings. Investors and XBRL - Setting the Standard for Data Quality.  In this panel discussion, the key learning was that CFAs, academics and the financial community are not using XBRL as expected.  The issues raised include the accuracy and completeness of filings, the number of taxonomy extensions, and the limited number of tools available to help analyze XBRL data.  Another big issue that was raised is the lack of historic results in XBRL - most analysts need 10 quarters of historic data.  On the positive side, XBRL has the potential to eliminate re-keying of data and the errors that come with it, and can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings. A US Roadmap for XBRL Financial Reporting.  This was a panel discussion featuring Jeff Neumann (SEC), Campbell Pryde (XBRL US), and Louis Matherne (FASB).  Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011.  XBRL for Mutual Fund Reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review.  The XBRL tagging/filing process is improving each quarter - more education is helping here.
    The FASB is looking at extensions to date, and potential additions to the US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data.  The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by the SEC.  The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (e.g., airlines), and eliminates 500 concepts (meaning they can't be used going forward but are still supported for historical comparison).  The 2011 US GAAP Taxonomy will be available for use with Q2 2011 SEC filings.  More information about this can be found on the FASB web site: http://www.fasb.org/home Accounting Firms and XBRL.  This session covered the role of audit firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning.  The main advice provided was that organizations should document the XBRL mapping process and perform peer comparisons and risk assessments on a regular basis. Wednesday Nov. 10th included more keynotes, general sessions on Corporate Actions, and the XBRL Essentials Workshop Training for corporate filers.  The XBRL Essentials Training included: Getting Started; Once You Have the Basics; Detailed Footnote Tagging and Handling Tables; Quality Control and Trust in the XBRL Process; Bringing XBRL In-House: What Are the Options, What Should You Consider?; and The US GAAP Financial Reporting Taxonomy - Overview of the 2011 Release. The XBRL Essentials Training was well attended, with about 80 people.  This included a good overview of the SEC's XBRL mandate, the limited liability issue, tagging levels, the recommended planning process, internal vs. outsourced approaches, and how to manage service providers.  I learned a lot from the session on detailed tagging.  This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information).  The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand.  The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required.  The slides from this session are posted on the XBRL US web site. (http://xbrl.us/events/Pages/archive.aspx) For me, the main summary points and takeaways from the XBRL US conference are: XBRL for financial reporting has turned the corner and gone mainstream, with 1500 companies currently using it and 8000 more coming in 2011; the expected value is not yet being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle); and XBRL is becoming the global standard for all business communications beyond just the financials - e.g., adoption for mutual funds, corporate actions and others planned for the future. If you would like to learn more about XBRL and the various training programs, services and software tools that are available, check out the XBRL US web site and, even better, become a member.  Here's a link:  http://xbrl.us/Pages/default.aspx

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan. The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users: it allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database, and it gives native "NoSQL" access to the storage layer without going first through SQL transformations and parsing. Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future. Implementation The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, how to query the data, and how to get started. Connecting to the database A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function: var nosql = require("mysql-js"); var dbProperties = { "implementation" : "ndb", "database" : "test" }; nosql.openSession(dbProperties, null, onSession); The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects. Reading data The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema: create table employee ( id int not null primary key, name varchar(32), salary float ) ENGINE=ndbcluster; Since the primary key is a number, you can provide the key as a number to the find function. var onSession = function(err, session) { if (err) { console.log(err); ... error handling } session.find('employee', 0, onData); }; var onData = function(err, data) { if (err) { console.log(err); ...
    error handling } console.log('Found: ', JSON.stringify(data)); ... use data in application }; If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses, by specifying an annotation, and pass your domain model to the find function. var annotations = new nosql.Annotations(); var Employee = function(id, name, salary) { this.id = id; this.name = name; this.salary = salary; this.giveRaise = function(percent) { this.salary += this.salary * percent; }; }; annotations.mapClass(Employee, {'table' : 'employee'}); var onSession = function(err, session) { if (err) { console.log(err); ... error handling } session.find(Employee, 0, onData); }; Updating data You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database, using the update function. var onData = function(err, emp) { if (err) { console.log(err); ... error handling } console.log('Found: ', JSON.stringify(emp)); emp.giveRaise(0.12); // gee, thanks! session.update(emp); // oops, session is out of scope here }; Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function. But the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function: var onSession = function(err, session) { if (err) { console.log(err); ... error handling } session.find(Employee, 0, onData, session); }; var onData = function(err, emp, session) { if (err) { console.log(err); ... error handling } console.log('Found: ', JSON.stringify(emp)); emp.giveRaise(0.12); // gee, thanks! session.update(emp, onUpdate); // session is now in scope }; var onUpdate = function(err, emp) { if (err) { console.log(err); ... error handling } }; Inserting data Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it: var onSession = function(err, session) { var data = new Employee(999, 'Mat Keep', 20000000); session.persist(data, onInsert); }; Deleting data To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant. var onSession = function(err, session) { var key = new Employee(999); session.remove(key, onDelete); }; More extensive queries We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned. How to evaluate The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.
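    Pulling the snippets above together, here is a hedged end-to-end sketch of the flow the tutorial describes: open a session, persist an Employee, read it back via the extra-parameter passthrough, give a raise and update. It assumes the mysql-js module and ndb backend are set up as described and that the employee table from the schema above exists; this illustrates the API as presented in the tutorial and has not been tested against the labs build:

        var nosql = require("mysql-js");
        var annotations = new nosql.Annotations();

        // Domain object mapped to the employee table, as in the tutorial.
        var Employee = function(id, name, salary) {
          this.id = id;
          this.name = name;
          this.salary = salary;
        };
        annotations.mapClass(Employee, {'table' : 'employee'});

        // Runs after find(); the session arrives as a passed-through extra parameter.
        var onFound = function(err, emp, session) {
          if (err) { return console.log(err); }
          emp.salary += emp.salary * 0.10;   // a 10% raise
          session.update(emp, function(err) {
            if (err) { return console.log(err); }
            console.log('Updated: ', JSON.stringify(emp));
          });
        };

        var dbProperties = { "implementation" : "ndb", "database" : "test" };
        nosql.openSession(dbProperties, null, function(err, session) {
          if (err) { return console.log(err); }
          // Insert a row, then read it back by primary key.
          session.persist(new Employee(1, 'Alice', 50000), function(err) {
            if (err) { return console.log(err); }
            // Extra arguments after the callback are passed through to it.
            session.find(Employee, 1, onFound, session);
          });
        });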

    Read the article

  • apache-memory-hacker-linux

    - by bibhudatta
    When we start the Linux system it takes only 435 MB of memory, and it is a 4 GB memory server. When we start the httpd service it takes 1000 MB, then automatically it takes all the memory and the server crashes. Even when we stop Apache, it only releases 200 MB of memory. What could the problem be? Can anyone tell me what these hackers are doing? I see they are sending some hits to my Apache, but I think they are doing it from this system. Below is the log. Please help me out with this. [root@host ~]# tail -20 /var/log/httpd/dostizone.com-combined.log 180.76.5.143 - - [14/Nov/2011:02:30:16 +0530] "GET /blogs/10248/209403/nfl-panties-since-the-quality-of HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" 180.76.5.88 - - [14/Nov/2011:02:30:31 +0530] "GET /blogs/815/158725/new-jersey-attorney-search HTTP/1.1" 403 2290 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" 220.181.108.186 - - [14/Nov/2011:02:30:32 +0530] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:30:20 +0530] "GET /blogs/805/11279/supra-suprano-high-shoes HTTP/1.1" 200 30642 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:37 +0530] "GET /blogs/10514/215084/oakland-raiders-sweatpants-tags HTTP/1.1" 403 2297 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 220.181.94.237 - - [14/Nov/2011:02:30:12 +0530] "GET /profile/8509 HTTP/1.1" 200 236894 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)" 220.181.94.237 - - [14/Nov/2011:02:30:43 +0530] "GET /mode-switch?return_url=%2Fblogs%2F8529%2F160217%2Fclimate-jordan-6 HTTP/1.1" 302 1 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)" crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:44 +0530] "GET /blogs/390/61573/blackhawk-jerseys-from-the-you HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" 124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/core.js HTTP/1.1" 200 26869 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)" 124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Activity/externals/scripts/core.js HTTP/1.1" 200 26873 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)" 124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/imagezoom/core.js HTTP/1.1" 200 26899 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)" 180.76.5.153 - - [14/Nov/2011:02:30:50 +0530] "GET /blogs/10252/212268/cleveland-browns-authentic-jerse HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:51 +0530] "GET /blogs/741/46260/chocolate-ugg-women-boots-1873 HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1;
+http://www.google.com/bot.html)" 124.115.1.7 - - [14/Nov/2011:02:30:40 +0530] "GET /blogs/682/97454/swarovski-jewellry-sale-articles HTTP/1.1" 200 25770 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:56 +0530] "GET /blogs/779/60941/players-a-to-z-michael-cuddyer HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:01 +0530] "GET /blogs/469/58551/chicago-bears-news-there-exist HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" 220.181.94.237 - - [14/Nov/2011:02:30:54 +0530] "GET /blogs/8529/160217/climate-jordan-6 HTTP/1.1" 200 30750 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)" 180.76.5.59 - - [14/Nov/2011:02:31:05 +0530] "GET /blogs/815/158197/cheap-calgary-flames-jerseys HTTP/1.1" 403 2292 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:06 +0530] "GET /mode-switch?return_url=%2Fblogs%2F387%2F45679%2Fhandbag-louis-vuitton-judy-mm-m4 HTTP/1.1" 403 2258 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)" crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:31:10 +0530] "GET /public/temporary/c83b731ecc556d7fd1a7732d9ac16ed6.png HTTP/1.1" 404 2305 "-" "Googlebot-Image/1
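    The log above shows mostly search-engine crawlers (Baiduspider, Googlebot, Sogou, Soso) rather than a break-in, so the crash pattern is more likely Apache spawning more child processes than 4 GB of RAM can hold. A hedged starting point, assuming the default prefork MPM and roughly 30 MB per httpd process (measure the real per-process size with top or ps on the actual server; the values below are illustrative only):

        # /etc/httpd/conf/httpd.conf (prefork MPM) - illustrative, untested values:
        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            # ~3 GB for Apache / ~30 MB per child => cap around 100 children,
            # so crawler bursts queue instead of driving the box into swap.
            MaxClients          100
            # Recycle children periodically to contain per-process memory growth.
            MaxRequestsPerChild 4000
        </IfModule>

    Rate-limiting the crawlers themselves, for example with a robots.txt Crawl-delay directive (which some of these bots honour), can also reduce the load.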

    Read the article

  • Wordpress blog with Joomla?

    - by user427902
    Hi, I have a Wordpress installation in a subfolder (not the root), like http://server/blog/. Now I have installed Joomla in the root (http://server/). Everything seems to be working fine with the Joomla part. However, the blog part is messed up. If I browse the homepage of my blog, http://server/blog/, it works like a charm. But when trying to view individual blog pages, say http://server/blog/some_category/some_post, I get a Joomla 404 page. So I was wondering if it is possible to use both Wordpress and Joomla on the same server in the setup I am attempting. Let me clarify that I am NOT looking to integrate user login or any such thing. I just want the blog to be functional under a subfolder while I run the Joomla site in the root. So, what is the correct way to go about it? Can this be solved by some config edits or something else? Edit: Here's the .htaccess for Joomla... (I can't find any .htaccess for WP though; still looking for it.) ## # @version $Id: htaccess.txt 14401 2010-01-26 14:10:00Z louis $ # @package Joomla # @copyright Copyright (C) 2005 - 2010 Open Source Matters. All rights reserved. # @license http://www.gnu.org/copyleft/gpl.html GNU/GPL # Joomla! is Free Software ## ##################################################### # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE # # The line just below this section: 'Options +FollowSymLinks' may cause problems # with some server configurations. It is required for use of mod_rewrite, but may already # be set by your server administrator in a way that dissallows changing it in # your .htaccess file. If using it causes your server to error out, comment it out (add # to # beginning of line), reload your site in your browser and test your sef url's. If they work, # it has been set by your server administrator and you do not need it set here. # ##################################################### ## Can be commented out if causes errors, see notes above. Options +FollowSymLinks # # mod_rewrite in use RewriteEngine On ########## Begin - Rewrite rules to block out some common exploits ## If you experience problems on your site block out the operations listed below ## This attempts to block the most common type of exploit `attempts` to Joomla! # ## Deny access to extension xml files (uncomment out to activate) #<Files ~ "\.xml$"> #Order allow,deny #Deny from all #Satisfy all #</Files> ## End of deny access to extension xml files RewriteCond %{QUERY_STRING} mosConfig_[a-zA-Z_]{1,21}(=|\%3D) [OR] # Block out any script trying to base64_encode crap to send via URL RewriteCond %{QUERY_STRING} base64_encode.*\(.*\) [OR] # Block out any script that includes a <script> tag in URL RewriteCond %{QUERY_STRING} (\<|%3C).*script.*(\>|%3E) [NC,OR] # Block out any script trying to set a PHP GLOBALS variable via URL RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR] # Block out any script trying to modify a _REQUEST variable via URL RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2}) # Send all blocked request to homepage with 403 Forbidden error! RewriteRule ^(.*)$ index.php [F,L] # ########## End - Rewrite rules to block out some common exploits # Uncomment following line if your webserver's URL # is not directly related to physical file paths. # Update Your Joomla! Directory (just / for root) # RewriteBase / ########## Begin - Joomla! 
core SEF Section # RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !^/index.php RewriteCond %{REQUEST_URI} (/|\.php|\.html|\.htm|\.feed|\.pdf|\.raw|/[^.]*)$ [NC] RewriteRule (.*) index.php RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L] # ########## End - Joomla! core SEF Section
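    Since the blog's homepage works but its inner pages fall through to Joomla, the most likely culprit is a missing .htaccess in the /blog/ folder: Wordpress needs its own rewrite rules there for pretty permalinks, and a deeper .htaccess overrides the root one for requests under /blog/. A minimal sketch of the standard Wordpress rewrite block for a subfolder install (hedged, since the exact server layout isn't shown):

    # /blog/.htaccess -- standard Wordpress rewrite block for a subfolder install.
    # A deeper .htaccess takes precedence over the root (Joomla) one for /blog/ requests.
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /blog/
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /blog/index.php [L]
    </IfModule>

    As a belt-and-braces measure you could also exclude the subfolder from Joomla's SEF rules by adding "RewriteCond %{REQUEST_URI} !^/blog/" just above the "RewriteRule (.*) index.php" line in the root .htaccess shown above.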

    Read the article

  • How I understood monads, part 1/2: sleepless and self-loathing in Seattle

    - by Bertrand Le Roy
    For some time now, I had been noticing some interest in monads, mostly in the form of unintelligible (to me) blog posts and comments saying “oh, yeah, that’s a monad” about random stuff as if it were absolutely obvious and if I didn’t know what they were talking about, I was probably an uneducated idiot, ignorant about the simplest and most fundamental concepts of functional programming. Fair enough, I am pretty much exactly that. Being the kind of guy who can spend eight years in college just to understand a few interesting concepts about the universe, I had to check it out and try to understand monads so that I too can say “oh, yeah, that’s a monad”. Man, was I hit hard in the face with the limitations of my own abstract thinking abilities. All the articles I could find about the subject seemed to be vaguely understandable at first but very quickly overloaded the very few concept slots I have available in my brain. They also seemed to be consistently using arcane notation that I was entirely unfamiliar with. It finally all clicked together one Friday afternoon during the team’s beer symposium when Louis was patient enough to break it down for me in a language I could understand (C#). I don’t know if being intoxicated helped. Feel free to read this with or without a drink in hand. So here it is in a nutshell: a monad allows you to manipulate stuff in interesting ways. Oh, OK, you might say. Yeah. Exactly. Let’s start with a trivial case: public static class Trivial { public static TResult Execute<T, TResult>( this T argument, Func<T, TResult> operation) { return operation(argument); } } This is not a monad. I removed most concepts here to start with something very simple. There is only one concept here: the idea of executing an operation on an object. This is of course trivial and it would actually be simpler to just apply that operation directly on the object. But please bear with me, this is our first baby step. Here’s how you use that thing: "some string" .Execute(s => s + " processed by trivial proto-monad.") .Execute(s => s + " And it's chainable!"); What we’re doing here is analogous to having an assembly chain in a factory: you can feed it raw material (the string here) and a number of machines that each implement a step in the manufacturing process and you can start building stuff. The Trivial class here represents the empty assembly chain, the conveyor belt if you will, but it doesn’t care what kind of raw material gets in, what gets out or what each machine is doing. It is pure process. A real monad will need a couple of additional concepts. Let’s say the conveyor belt needs the material to be processed to be contained in standardized boxes, just so that it can safely and efficiently be transported from machine to machine or so that tracking information can be attached to it. Each machine knows how to treat raw material or partly processed material, but it doesn’t know how to treat the boxes so the conveyor belt will have to extract the material from the box before feeding it into each machine, and it will have to box it back afterwards. This conveyor belt with boxes is essentially what a monad is. It has one method to box stuff, one to extract stuff from its box and one to feed stuff into a machine. So let’s reformulate the previous example but this time with the boxes, which will do nothing for the moment except contain stuff. 
public class Identity<T> { public Identity(T value) { Value = value; } public T Value { get; private set;} public static Identity<T> Unit(T value) { return new Identity<T>(value); } public static Identity<U> Bind<U>( Identity<T> argument, Func<T, Identity<U>> operation) { return operation(argument.Value); } } Now this is a true to the definition Monad, including the weird naming of the methods. It is the simplest monad, called the identity monad and of course it does nothing useful. Here’s how you use it: Identity<string>.Bind( Identity<string>.Unit("some string"), s => Identity<string>.Unit( s + " was processed by identity monad.")).Value That of course is seriously ugly. Note that the operation is responsible for re-boxing its result. That is a part of strict monads that I don’t quite get and I’ll take the liberty to lift that strange constraint in the next examples. To make this more readable and easier to use, let’s build a few extension methods: public static class IdentityExtensions { public static Identity<T> ToIdentity<T>(this T value) { return new Identity<T>(value); } public static Identity<U> Bind<T, U>( this Identity<T> argument, Func<T, U> operation) { return operation(argument.Value).ToIdentity(); } } With those, we can rewrite our code as follows: "some string".ToIdentity() .Bind(s => s + " was processed by monad extensions.") .Bind(s => s + " And it's chainable...") .Value; This is considerably simpler but still retains the qualities of a monad. But it is still pointless. Let’s look at a more useful example, the state monad, which is basically a monad where the boxes have a label. It’s useful to perform operations on arbitrary objects that have been enriched with an attached state object. public class Stateful<TValue, TState> { public Stateful(TValue value, TState state) { Value = value; State = state; } public TValue Value { get; private set; } public TState State { get; set; } } public static class StateExtensions { public static Stateful<TValue, TState> ToStateful<TValue, TState>( this TValue value, TState state) { return new Stateful<TValue, TState>(value, state); } public static Stateful<TResult, TState> Execute<TValue, TState, TResult>( this Stateful<TValue, TState> argument, Func<TValue, TResult> operation) { return operation(argument.Value) .ToStateful(argument.State); } } You can get a stateful version of any object by calling the ToStateful extension method, passing the state object in. You can then execute ordinary operations on the values while retaining the state: var statefulInt = 3.ToStateful("This is the state"); var processedStatefulInt = statefulInt .Execute(i => ++i) .Execute(i => i * 10) .Execute(i => i + 2); Console.WriteLine("Value: {0}; state: {1}", processedStatefulInt.Value, processedStatefulInt.State); This monad differs from the identity by enriching the boxes. There is another way to give value to the monad, which is to enrich the processing. An example of that is the writer monad, which can be typically used to log the operations that are being performed by the monad. Of course, the richest monads enrich both the boxes and the processing. That’s all for today. I hope with this you won’t have to go through the same process that I did to understand monads and that you haven’t gone into concept overload like I did. Next time, we’ll examine some examples that you already know but we will shine the monadic light, hopefully illuminating them in a whole new way. 
Realizing that this pattern is actually in many places but mostly unnoticed is what will enable the truly casual “oh, yes, that’s a monad” comments. Here’s the code for this article: http://weblogs.asp.net/blogs/bleroy/Samples/Monads.zip The Wikipedia article on monads: http://en.wikipedia.org/wiki/Monads_in_functional_programming This article was invaluable for me in understanding how to express the canonical monads in C# (interesting Linq stuff in there): http://blogs.msdn.com/b/wesdyer/archive/2008/01/11/the-marvels-of-monads.aspx
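    To make the writer monad mentioned above concrete, here is a hypothetical sketch in the same extension-method style as the article's samples (the Logged<T> type and its method names are mine, not part of the article's download):

    using System;
    using System.Collections.Generic;

    // A sketch of a writer monad: the box carries a value plus an accumulated log.
    public class Logged<T> {
        public Logged(T value, IEnumerable<string> log) {
            Value = value;
            Log = new List<string>(log);
        }
        public T Value { get; private set; }
        public List<string> Log { get; private set; }
    }

    public static class LoggedExtensions {
        // Unit: box a raw value with an empty log.
        public static Logged<T> ToLogged<T>(this T value) {
            return new Logged<T>(value, new string[0]);
        }
        // Bind: run the operation on the unboxed value, re-box the result
        // and append a message describing the step to the accumulated log.
        public static Logged<U> Bind<T, U>(
            this Logged<T> argument, Func<T, U> operation, string message) {
            var result = new Logged<U>(operation(argument.Value), argument.Log);
            result.Log.Add(message);
            return result;
        }
    }

    With those, 3.ToLogged().Bind(i => i + 1, "incremented").Bind(i => i * 10, "multiplied by ten") yields a box whose Value is 40 and whose Log records both steps. This Bind is also the same shape as LINQ's SelectMany, which is the connection the last linked article explores.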

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns, is irrelevant: these are simply the physical stats of all indexes; disabled indexes are ignored, however. In its simplest form this dmv can be executed with every parameter left as NULL. The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, ‘SAMPLED’, ‘LIMITED’ or ‘DETAILED’. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLs in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and then NULLs in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in so be careful running this on big databases unless you have tried it on a test server first. Let’s look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. 
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips gives us SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id] These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table. e.g. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address') , 1, NULL, NULL) AS ddips Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N'AdventureWorks_2008'); SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address'); IF @db_id IS NULL BEGIN PRINT N'Invalid database'; END ELSE IF @object_id IS NULL BEGIN PRINT N'Invalid object'; END ELSE BEGIN SELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , 'LIMITED'); END; GO In cases where the results of querying this dmv don’t have any effect on other processes (i.e. you are simply viewing the results in the SSMS results area), an invalid value will simply be noticed when the results are not consistent with the expected results; in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are: We’ll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are: avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). 
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue as each piece would have a significant piece of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work for the disks to do in order to retrieve the data to satisfy the query increases and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to another. Keeping this in mind DBAs need to use an ‘intelligent’ process that assesses key characteristics of an index and decides on the best, if any, defragmentation method to apply. There is a simple example of this in the sample code found in the Books OnLine content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let’s get back on track then. Querying the dmv with the fifth parameter value as ‘DETAILED’ takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (Page_Count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be. A detail of the smallest and largest records in the index. 
    Purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and whether the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index, but rather always added to the end. * – it’s approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, he will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
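    As a closing illustration, the ‘intelligent’ process described above can be sketched in a single query. This is a minimal example of my own in the spirit of Books OnLine example D, not the Ufford or Hallengren solutions, and the 5%/30% thresholds are only the commonly cited guidance, to be adjusted for your environment:

    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         , [i].[name] AS [IndexName]
         , [ddips].[avg_fragmentation_in_percent]
         , [ddips].[page_count]
         , CASE
               WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'REBUILD'
               WHEN [ddips].[avg_fragmentation_in_percent] > 5 THEN 'REORGANIZE'
               ELSE 'No action'
           END AS [SuggestedAction]
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    INNER JOIN [sys].[indexes] AS i
        ON [ddips].[index_id] = [i].[index_id]
        AND [ddips].[object_id] = [i].[object_id]
    WHERE [ddips].[page_count] > 8 -- ignore tiny indexes where fragmentation is noise
        AND [i].[index_id] > 0 -- skip heaps
    ORDER BY [ddips].[avg_fragmentation_in_percent] DESC;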

    Read the article

  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended the most prestigious SQL Server event, SQLPASS, between Nov 8-11, 2010 in Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here are the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to the USA I gain a day and when I travel back home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are a few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only a few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 on each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was a great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in the area of performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are a few names of the speakers whose sessions I attended (please note, the following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Buck Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were a few sessions I did not like, and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
    All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was a bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Schulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always a pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into a resource for learning for every SQL Server Developer and Administrator in the world. I had a great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is a very simple person and very true to his heart. I think there is not a single person in the whole community who does not like him. He is a friend of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got the chance until today. I had a great time meeting him in person and we spent a considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is a drill sergeant in the US Army. If you get the impression that he is a giant with a very strong personality – you are wrong. He is a very kind and soft-spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got a very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique. I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I talked with him for a long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and fourth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction; he is very well known and very clear in conveying his ideas. I learned a lot from him over the course of the year. Every time I meet him, I learn something new and this time was no exception. 
    Joe Webb – Joey is all about community and people; we had an interesting conversation about community, MVPs and how one can be helpful to the community without losing passion over a long time. It is always pleasant to talk to him and of course, I had a fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight into how one can write a book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallengren – I did not know he was coming to the summit. I had a great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be an integral part of the SQL Server community and PASS HQ. It was wonderful to meet her again and re-connect. She is a wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had a different theme but the goal of all the parties was the same – networking. Here are a few of the parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had a great time meeting people at the various parties. There were a few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were a few meetings marked “NDA.” Someone asked me “why are these things NDA?” My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like a child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear. I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS

    Read the article

  • Setting up a new Silverlight 4 Project with WCF RIA Services

    - by Kevin Grossnicklaus
    Many of my clients are actively using Silverlight 4 and RIA Services to build powerful line of business applications.  Getting things set up correctly is critical to being able to take full advantage of the RIA services plumbing and when developers struggle with the setup they tend to shy away from the solution as a whole.  I’m a big proponent of RIA services and wanted to take the opportunity to share some of my experiences in setting up these types of projects.  In late 2010 I presented a RIA Services Master Class here in St. Louis, MO through my firm (ArchitectNow) and the information shared in this post was promised during that presentation. One other thing I want to mention before diving in is the existence of a number of other great posts on this subject.  I’ve learned a lot from many of them and wanted to call out a few of them.  The purpose of my post is to point out some of the gotchas that people get caught up on in the process but I would still encourage you to do as much additional research as you can to find the perfect setup for your needs. Here are a few additional blog posts and articles you should check out on the subject: http://msdn.microsoft.com/en-us/library/ee707351(VS.91).aspx http://adam-thompson.com/post/2010/07/03/Getting-Started-with-WCF-RIA-Services-for-Silverlight-4.aspx Technologies I don’t intend for this post to turn into a full WCF RIA Services tutorial but I did want to point out what technologies we will be using: Visual Studio.NET 2010 Silverlight 4.0 WCF RIA Services for Visual Studio 2010 Entity Framework 4.0 I also wanted to point out that the screenshots came from my personal development box which has a number of additional plug-ins and frameworks loaded so a few of the screenshots might not match 100% with what you see on your own machines. If you do not have Visual Studio 2010 you can download the express version from http://www.microsoft.com/express.  The Silverlight 4.0 tools and the WCF RIA Services components are installed via the Web Platform Installer (http://www.microsoft.com/web/download). Also, the examples given in this post are done in C#…sorry to you VB folks but the concepts are 100% identical. Setting up a new RIA Services Project This section will provide a step-by-step walkthrough of setting up a new RIA services project using a shared DLL for server side code and a simple Entity Framework model for data access.  All projects are created with the consistent ArchitectNow.RIAServices filename prefix and default namespace.  This would be modified to match your company's standards. First, open Visual Studio and open the new project window via File->New->Project.  In the New Project window, select the Silverlight folder in the Installed Templates section on the left and select “Silverlight Application” as your project type.  Verify your solution name and location are set appropriately.  Note that the project name we specified in the example below ends with .Client.  This indicates the name which will be given to our Silverlight project. I consider Silverlight a client-side technology and thus use this name to reflect that.  Click Ok to continue. During the creation of a new Silverlight 4 project you will be prompted with the following dialog to create a new ASP.NET web project to host your Silverlight content.  As we are demonstrating the setup of a WCF RIA Services infrastructure, make sure the “Enable WCF RIA Services” option is checked and click OK.  
    Obviously, there are some other options here which have an effect on your solution and you are welcome to look around.  For our example we are going to leave the ASP.NET Web Application Project selected.  If you are interested in having your Silverlight project hosted in an MVC 2 application or a Web Site project these options are available as well.  Also, whichever web project type you select, the name can be modified here as well.  Note that it defaults to the same name as your Silverlight project with the addition of a .Web suffix. At this point, your full Silverlight 4 project and host ASP.NET Web Application should be created and will now display in your Visual Studio solution explorer as part of a single Visual Studio solution as follows: Now we want to add our WCF RIA Services projects to this same solution.  To do so, right-click on the Solution node in the solution explorer and select Add->New Project.  In the New Project dialog again select the Silverlight folder under the Visual C# node on the left and, in the main area of the screen, select the WCF RIA Services Class Library project template as shown below.  Make sure your project name is set appropriately as well.  For the sample below, we will name the project “ArchitectNow.RIAServices.Server.Entities”.   The .Server.Entities suffix we use is meant to simply indicate that this particular project will contain our WCF RIA Services entity classes (as you will see below).  Click OK to continue. Once you have created the WCF RIA Services Class Library specified above, Visual Studio will automatically add TWO projects to your solution.  The first will be a project called .Server.Entities (using our naming conventions) and the other will have the same name with a .Web extension.  The full solution (with all 4 projects) is shown in the image below.  The .Entities project will essentially remain empty and is actually a Silverlight 4 class library that will contain generated RIA Services domain objects.  It will be referenced by our front-end Silverlight project and thus allow for simplified sharing of code between the client and the server.   The .Entities.Web project is a .NET 4.0 class library into which we will put our data access code (via Entity Framework).  This is our server side code and business logic and the RIA Services plumbing will maintain a link between this project and the front end.  Specific entities such as our domain objects and other code we set to be shared will be copied automatically into the .Entities project to be used in both the front end and the back end. At this point, we want to do a little cleanup of the projects in our solution and we will do so by deleting the “Class1.cs” class from both the .Entities project and the .Entities.Web project.  (Has anyone ever intentionally named a class “Class1”?) Next, we need to configure a few references to make RIA Services work.  THIS IS A KEY STEP THAT CAUSES MANY HEADACHES FOR DEVELOPERS NEW TO THIS INFRASTRUCTURE! Using the Add References dialog in Visual Studio, add a project reference from the *.Client project (our Silverlight 4 client) to the *.Entities project (our RIA Services class library).  Next, again using the Add References dialog in Visual Studio, add a project reference from the *.Client.Web project (our ASP.NET host project) to the *.Entities.Web project (our back-end data services DLL).  
    To get to the Add References dialog, simply right-click on the project you wish to add a reference to in the Visual Studio solution explorer and select “Add Reference” from the resulting context menu.  You will want to make sure these references are added as “Project” references to simplify your future debugging.  To reiterate the reference direction using the project names we have utilized in this example thus far:  .Client references .Entities and .Client.Web references .Entities.Web.  If you have opted for a different naming convention, then the Silverlight project must reference the RIA Services Silverlight class library and the ASP.NET host project must reference the server-side class library. Next, we are going to add a new Entity Framework data model to our data services project (.Entities.Web).  We will do this by right clicking on this project (ArchitectNow.Server.Entities.Web in the above diagram) and selecting Add->New Item.  In the Add New Item dialog we will select ADO.NET Entity Data Model as in the following diagram.  For now we will call this simply SampleDataModel.edmx and click OK. It is worth pointing out that WCF RIA Services is in no way tied to the Entity Framework as a means of accessing data and any data access technology is supported (as long as the server side implementation maps to the RIA Services pattern which is a topic beyond the scope of this post).  We are using EF to quickly demonstrate the RIA Services concepts and setup infrastructure, as such, I am not providing a database schema with this post but am instead connecting to a small sample database on my local machine.  The following diagram shows a simple EF Data Model with two tables that I reverse engineered from a local data store.   If you are putting together your own solution, feel free to reverse engineer a few tables from any local database to which you have access. At this point, once you have an EF data model generated as an EDMX into your .Entities.Web project YOU MUST BUILD YOUR SOLUTION.  I know it seems strange to call that out but it is important that the solution be built at this point for the next step to be successful.  Obviously, if you have any build errors, these must be addressed at this point. At this point we will add a RIA Services Domain Service to our .Entities.Web project (our server side code).  We will need to right-click on the .Entities.Web project and select Add->New Item.  In the Add New Item dialog, select Domain Service Class and verify the name of your new Domain Service is correct (ours is called SampleService.cs in the image below).  Next, click "Add”. After clicking “Add” to include the Domain Service Class in the selected project, you will be presented with the following dialog.  In it, you can choose which entities from the selected EDMX to include in your services and if they should be allowed to be edited (i.e. inserted, updated, or deleted) via this service.  If the “Available DataContext/ObjectContext classes” dropdown is empty, this indicates you have not yet successfully built your project after adding your EDMX.  I would also recommend verifying that the “Generate associated classes for metadata” option is selected.  Once you have selected the appropriate options, click “OK”. 
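    Before moving on, it may help to see roughly what a domain service looks like. The following is a hand-written sketch, not the actual generated file: it assumes the EF model exposes a Customer entity with a CustomerId property and that the generated context is named SampleDataModelContainer; your generated SampleService.cs will reflect whatever entities you selected in the dialog above.

    using System.Linq;
    using System.ServiceModel.DomainServices.EntityFramework;
    using System.ServiceModel.DomainServices.Hosting;

    // EnableClientAccess exposes this service to the Silverlight client;
    // RIA Services generates matching proxy types in the .Entities project.
    [EnableClientAccess]
    public class SampleService : LinqToEntitiesDomainService<SampleDataModelContainer>
    {
        // Query method: the client sees an asynchronous, LINQ-composable query.
        public IQueryable<Customer> GetCustomers()
        {
            return this.ObjectContext.Customers.OrderBy(c => c.CustomerId);
        }

        // Update method: generated when editing was enabled for the entity.
        public void UpdateCustomer(Customer currentCustomer)
        {
            this.ObjectContext.Customers.AttachAsModified(
                currentCustomer, this.ChangeSet.GetOriginal(currentCustomer));
        }
    }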
    Once you have added the domain service class to the .Entities.Web project, the resulting solution should look similar to the following: Note that in the solution you now have a SampleDataModel.edmx which represents your EF data mapping to your database and a SampleService.cs which will contain a large amount of generated RIA Services code which RIA Services utilizes to access this data from the Silverlight front-end.  You will put all your server side data access code and logic into the SampleService.cs class.  The SampleService.metadata.cs class is for decorating the generated domain objects with attributes from the System.ComponentModel.DataAnnotations namespace for validation purposes. FINAL AND KEY CONFIGURATION STEP!  One key step that causes significant headaches for developers configuring RIA Services for the first time is the fact that, when we added the EDMX to the .Entities.Web project for our EF data access, a connection string was generated and placed within a newly generated App.Config file within that project.  While we didn’t point it out at the time you can see it in the image above.  This connection string will be required for the EF data model to successfully locate its data.  Also, when we added the Domain Service class to the .Entities.Web project, a number of RIA Services configuration options were added to the same App.Config file.   Unfortunately, when we ultimately begin to utilize the RIA Services infrastructure, our Silverlight UI will be making RIA services calls through the ASP.NET host project (i.e. .Client.Web).  This host project has a reference to the .Entities.Web project which actually contains the code so all will pass through correctly EXCEPT the fact that the host project will utilize its own Web.Config for any configuration settings.  For this reason we must now merge all the sections of the App.Config file in the .Entities.Web project into the Web.Config file in the .Client.Web project.  I know this is a bit tedious and I wish there were a simpler solution but it is required for our RIA Services Domain Service to be made available to the front end Silverlight project.  Much of this manual merge can be achieved by simply cutting and pasting from App.Config into Web.Config.  Unfortunately, the <system.webServer> section will exist in both and the contents of this section will need to be manually merged.  Fortunately, this is a step that needs to be taken only once per solution.  As you add additional data structures and Domain Services methods to the server no additional changes will be necessary to the Web.Config. Next Steps At this point, we have walked through the basic setup of a simple RIA services solution.  Unfortunately, there is still a lot to know about RIA services and we have not even begun to take advantage of the plumbing which we just configured (meaning we haven’t even made a single RIA services call).  I plan on posting a few more introductory posts over the next few weeks to take us to this step.  If you have any questions on the content in this post feel free to reach out to me via this Blog and I’ll gladly point you in (hopefully) the right direction. Resources Prior to closing out this post, I wanted to share a number of resources to help you get started with RIA services.  While I plan on posting more on the subject, I didn’t invent any of this stuff and wanted to give credit to the following areas for helping me put a lot of these pieces into place.   
The books and online resources below will go a long way to making you extremely productive with RIA services in the shortest time possible.  The only thing required of you is the dedication to take advantage of the resources available. Books Pro Business Applications with Silverlight 4 http://www.amazon.com/Pro-Business-Applications-Silverlight-4/dp/1430272074/ref=sr_1_2?ie=UTF8&qid=1291048751&sr=8-2 Silverlight 4 in Action http://www.amazon.com/Silverlight-4-Action-Pete-Brown/dp/1935182374/ref=sr_1_1?ie=UTF8&qid=1291048751&sr=8-1 Pro Silverlight for the Enterprise (Books for Professionals by Professionals) http://www.amazon.com/Pro-Silverlight-Enterprise-Books-Professionals/dp/1430218673/ref=sr_1_3?ie=UTF8&qid=1291048751&sr=8-3 Web Content RIA Services http://channel9.msdn.com/Blogs/RobBagby/NET-RIA-Services-in-5-Minutes http://silverlight.net/riaservices/ http://www.silverlight.net/learn/videos/all/net-ria-services-intro/ http://www.silverlight.net/learn/videos/all/ria-services-support-visual-studio-2010/ http://channel9.msdn.com/learn/courses/Silverlight4/SL4BusinessModule2/SL4LOB_02_01_RIAServices http://www.myvbprof.com/MainSite/index.aspx#/zSL4_RIA_01 http://channel9.msdn.com/blogs/egibson/silverlight-firestarter-ria-services http://msdn.microsoft.com/en-us/library/ee707336%28v=VS.91%29.aspx Silverlight www.silverlight.net http://msdn.microsoft.com/en-us/silverlight4trainingcourse.aspx http://channel9.msdn.com/shows/silverlighttv

    Read the article

  • CodePlex Daily Summary for Thursday, February 17, 2011

    CodePlex Daily Summary for Thursday, February 17, 2011Popular ReleasesMarr DataMapper: Marr Datamapper 2.7 beta: - Changed QueryToGraph relationship rules: 1) Any parent entity (entity with children) must have at least one PK specified or an exception will be thrown 2) All 1-M relationship entities must have at least one PK specified or an exception will be thrown Only 1-1 entities with no children are allowed to have 0 PKs specified. - fixed AutoQueryToGraph bug (columns in graph children were being included in the select statement)datajs - JavaScript Library for data-centric web applications: datajs version 0.0.2: This release adds support for parsing DateTime and DateTimeOffset properties into javascript Date objects and serialize them back.thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features Added a new option that allows properties on data contract types to be marked as virtual. Bug Fixes Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...Document.Editor: 2011.5: Whats new for Document.Editor 2011.5: New export to email New export to image New document background color Improved Tooltips Minor Bug Fix's, improvements and speed upsTerminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a Portable Environment, you can use the "Clean Install" and target your portable folder. Tested and it works fine. Changes for this release: Re-worked on the Toolstip settings are done, just to avoid the vs.net clash with auto-generating files for .settings files. renamed it to .settings.config Packged both log4net and ToolStripSettings files into the installer Upgraded the version inform...Export Test Cases From TFS: Test Case Export to Excel 1.0: Team Foundation Server (TFS) 2010 enables the users to manage test cases as Work Item(s). The complete description of the test case along with steps can be managed as single Work Item in TFS 2010. Before migrating to TFS 2010 many test teams will be using MS Excel to manage the test cases (or test scripts). However, after migrating to TFS 2010, test teams can manage the test cases in the server but there may be need to get the test cases into excel sheet like approval from Business Analysts ...WriteableBitmapEx: WriteableBitmapEx 0.9.7.0: Fixed many bugs. Added the Rotate method which rotates the bitmap in 90° steps clockwise and returns a new rotated WriteableBitmap. Added a Flip method with support for FlipMode.Vertical and FlipMode.Horizontal. Added a new Filter extension file with a convolution method and some kernel templates (Gaussian, Sharpen). Added the GetBrightness method, which returns the brightness / luminance of the pixel at the x, y coordinate as byte. Added the ColorKeying BlendMode. Added boundary ...AllNewsManager.NET: AllNewsManager.NET 1.3: AllNewsManager.NET 1.3. This new version provide several new features, improvements and bug fixes. Some new features: Online Users. Avatars. Copy function (to create a new article from another one). SEO improvements (friendly urls). New admin buttons. 
And more...Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011)moved to Beta stage publish photo feature "email" field of User object added new Graph Api object: Group, Event new Graph Api connection: likes, groups, eventsDJME - The jQuery extensions for ASP.NET MVC: DJME2 -The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can goto http://www.dotnetage.com/djme.html What is new ?The Grid extension added The ModelBinder added which helping you create Bindable data Action. The DnaFor() control factory added that enabled Model bindable extensions. Enhance the ListBox , ComboBox data binding.Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features Many bugfixesBuild Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: v2.4.11046.2045 Fixes and/or Improvements:Major: Added complete support for VC projects including .vcxproj & .vcproj. All padding issues fixed. A project's assembly versions are only changed if the project has been modified. Minor Order of versioning style values is now according to their respective positions in the attributes i.e. Major, Minor, Build, Revision. Fixed issue with global variable storage with some projects. Fixed issue where if a project item's file does not exist, a ...Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests addedTV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1 including the broken per-desktop backgrounds Further improves the speed of switching desktops A few UI performance improvements Added donations linksNuGet: NuGet 1.1: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit this our issues listEnhSim: EnhSim 2.4.0: 2.4.0This release supports WoW patch 4.06 at level 85 To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 - Upd...Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example that has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. 
You can refer to the User's Guide for complete documentation, and use the unit tests as guide...PDF Rider: PDF Rider 0.5.1: Changes from the previous version * Use dynamic layout to better fit text in other languages * Includes French and Spanish localizations Prerequisites * Microsoft Windows Operating Systems (XP - Vista - 7) * Microsoft .NET Framework 3.5 runtime * A PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructionsChoose one of the following methods: 1. Download and run the "pdfRider0.5.1-setup.exe" (reccomended) 2. Down...Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ...New ProjectsAlpe d'HuZes: This project contains the source for Alpe d'HuZes, an organization that fights the cancer disease, by giving people the chance to ride the Alpe d'Hues, a mountain in france. By climbing this mountain, money is collected which is entirely donated to the "kankerbestrijding".AstroLib: Astronomical libraryDevon: Devon_Projectearthquake predictor: This project is attempt to create earthquake prediction application , which can help save lives. It is based on theory that number of lost pets, before earthquake, growing up. This statistics can be obtained from free news papers, boards, forums... Technology : C#, ASPX, .NET 4FCNS.Calendar: FCNS.Calendar ??? MonoCalendar(?????????) ?.NET??????????,??????Mac???????????????iCal?????。???????????.??????????.?????????????????.????????????????. FlashRelease [O-GO.ru edition]: FlashRelease it's a tool for easy create description of new video\music\game torrent releases. Developed in Delphi.Forms based authentication for SharePoint2010: Forms based authentication Management features for SharePoint 2010. <a href="http://www.softwarediscipline.com/post/2011/01/03/Forms-based-authentication-feature-SharePoint-2010.aspx" alt="SharePoint 2010 FBA management feature">SharePoint 2010 FBA feature</a>ITune.LittleTools: Reads your ITune library and copy your track rating in ITune on to your file in windows.MingleProject: Just a simple project that showcases the power of ASP.NET and Visual StudioMvcContrib UI Extensions - Themed Grid & Menu: UI Extensions to the MvcContrib Project - Themed Grid & MenuNDOS Azure: Windows Azure projects developed at the "Open Source and Interoperability Development Nucleous" (http://ndos.codeplex.com/) at Universidade Federal do Rio Grande do Sul (http://www.ufrgs.br).ObjectDumper: ObjectDumper takes a normal .NET object and dumps it to a string, TextWriter, or file. Handy for debugging purposes.Open Analytic: Open Analytic is an open source business intelligence framework that believes in simplicity. 
The framework is developed with .NET language and can be easily integrated in custom product development.Pomodoro.Oob Expression Blend Example App: This is a non-functional Silverlight 4 Out-of-Browser app to demonstrate functionality in Expression Blend and to accompany user group talks and presentations on Blend.Senior Design: Uconn senior design project!Service Monitors - A services health monitoring tool: The idea behind this project is simple, I want to know when a service related to my application is not available. Our first intent to get a tool to generete the necessary data to be compliant with the Availability SLA of our systems. SGO: OrganizerSina Weibo QReminder: A handy utility that display remind message in the browser title for sina weibo.System Center Configuration Manager Integration Pack Extention: This integration pack adds some additional integration points for Opalis to System Center Configuration Manager. These functions are used in my User Self imaging workflow that will be demoed at MMS 2011.TwittaFox: TwittaFox ist ein kleiner Twitter-Client welcher direkt aus dem Tray angesprochen werden kann.Ultralight Markup: Ultralight Markup makes it easier for webmasters to allow safe user comments. It features a stripped-down intermediate markup language meant to bridge the gap between text entry and HTML. And the project includes an ASP.NET MVC implementation with a Javascript editor.Unit Conversion Library: Unit Conversion Library is a .Net 2.0 based library, containing static methods for all the Units Set present in Windows 7 calculator. "Angle", "Area", "Energy", "Length", "Power", "Pressure", "Temperature",Time", "Velocity", "Volume", "Weight/Mass".UTB-PFIII-TermProj-Team DeLaFuente, Vasquez, Morales, Dartez: This is the group project for UTB-PFIII Team project. Authors include David De La Fuente, Louis Dartez, Juan Vasquez and Froylan Morales.Version History to InfoPath Custom List Form: The ability to add a button to view the version history of an item when the display form is modified in InfoPath allows a user easy access to view versioning information. Out of the box, SharePoint does not allow this ability. This is a sandboxed solution.WeatherCN - ????: WeatherCN - ????WinformsPOCMVP: This is a simple, and small proof of concept for the Model View Presenter UI design pattern with C# WinForms.worldbestwebsites: Customer Connecting Websites A website development for customer connecting

    Read the article
