Search Results

Search found 63372 results on 2535 pages for 'data center'.

Page 100 of 2535

  • Center big image in smaller div

    - by larin555
    I'm trying to align images in the center of a slider div (I'm adjusting the FlexSlider CSS). Here's my CSS:

        .flexslider {margin: 0; padding: 0; width: 600px; height: 480px; overflow: hidden; margin-left: auto; margin-right: auto;}
        /* Hide the slides before the JS is loaded. Avoids image jumping */
        .flexslider .slides > li {display: none; -webkit-backface-visibility: hidden;}
        .flexslider .slides img {width: auto; height: 100%; display: inline-block; text-align: center;}

    Everything works the way I want, except that I want images wider than the div to be centered in it; right now they are left-aligned. I cannot use background-image, by the way. Any ideas? I also tried applying the following to .flexslider .slides img, none of which worked: margin-left: -50%; margin-left: auto with margin-right: auto; and left: 50% with right: 50%.
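    One possible fix (a sketch, not from the original thread): when the image is wider than its container, text-align can't center the overflow, but shifting the image left by half its own width can. This assumes the <li> can serve as the positioning context:

        .flexslider .slides > li { position: relative; }
        .flexslider .slides img {
          height: 100%;
          width: auto;
          position: relative;
          left: 50%;                   /* move the image's left edge to the middle */
          -webkit-transform: translateX(-50%);
          transform: translateX(-50%); /* pull it back by half its own width */
        }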

    Read the article

  • How do I center-align a horizontal <UL> menu?

    - by Steven
    I need to center-align a horizontal menu. I've tried various solutions, including mixes of inline-block / block / center-align etc., without success. Can someone help me please? :) Here is my code:

        <div class="topmenu-design">
          <!-- Top menu content: START -->
          <ul id="topmenu firstlevel">
            <li class="firstli" id="node_id_64"><div><a href="#"><span>Om kampanjen</span></a></div></li>
            <li id="node_id_65"><div><a href="#"><span>Fakta om inneklima</span></a></div></li>
            <li class="lastli" id="node_id_66"><div><a href="#"><span>Statistikk</span></a></div></li>
          </ul>
          <!-- Top menu content: END -->
        </div>

    UPDATE: I know how to center the UL within the DIV; that can be accomplished using Sarfraz's suggestion. But the list items are still floated left within the UL. Do I smell JavaScript to accomplish this?
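    Most likely no JavaScript is needed. A sketch (note that the markup's id="topmenu firstlevel" contains a space, which is invalid HTML, so the selector below is an assumption): floated items ignore the parent's text-align, so drop the float and let the items flow as inline blocks:

        .topmenu-design ul { text-align: center; }
        .topmenu-design ul li {
          float: none;            /* floated items ignore text-align on the parent */
          display: inline-block;
        }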

    Read the article

  • How do I center this form in CSS?

    - by johnny
    I have tried everything. I cannot get this centered on the screen. I am using IE 9, but it does the same in Chrome: it just sits on the left of the webpage. Thank you for any help.

        <style type="text/css">
        body {
          margin: 50px 0px;
          padding: 0px;
          text-align: center;
          align: center;
        }
        label, input {
          display: block;
          width: 150px;
          float: left;
          margin-bottom: 10px;
        }
        label {
          text-align: right;
          width: 75px;
          padding-right: 20px;
        }
        br {
          clear: left;
        }
        </style>
        </head>
        <body>
        <form name="Form1" action="mypage.asp" method="get">
          <label for="name">Name</label>
          <input id="name" name="name"><br>
          <label for="address">Address</label>
          <input id="address" name="address"><br>
          <label for="city">City</label>
          <input id="city" name="city"><br>
          <input type="submit" name="submit" id="submit" value="submit" class="button" />
        </form>
        </body>
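    One likely fix (a sketch; the 245px figure is an assumption derived from the widths in the question's own CSS): auto side margins only center an element that has an explicit width, so size the form to its floated contents, a 75px label plus 20px padding plus a 150px input:

        form {
          width: 245px;     /* 75 + 20 + 150, per the label/input rules above */
          margin: 0 auto;   /* auto margins center a block that has a set width */
        }

    Note that align: center is not a real CSS property and can be removed.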

    Read the article

  • How to center a list (ul)

    - by BlackPearl
    I have a ul list that contains some li elements, which I float left. It currently looks like this:

        | A | B | C | D | E |  <empty space>

    I want it to be:

        | A | B | C | D | E |

    that is, centered on the page, with the contents centered too.

    HTML:

        <div class="profile-content">
          <ul class="content-btn">
            <li><div class="digits">83</div>Followers</li>
            <li><div class="digits">1507</div>Tweets</li>
            <li><div class="digits">234</div>Friends</li>
            <li><div class="digits">51</div>Likes</li>
            <li><div class="digits">42</div>Gits</li>
          </ul>
          <div class="clear"></div>
        </div>

    CSS:

        .content-btn {
          width: 100%;
          margin: 0 auto;
        }
        .profile-content ul li {
          float: left;
          padding: 5px 8px;
          text-align: center;
          border-right: 1px solid #eeeeee;
          border-left: 1px solid #ffffff;
        }
        .profile-content ul li .digits {
          font-weight: bold;
          font-size: 16px;
        }
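    A common approach (a sketch based on the markup above): a width: 100% ul can't be centered by auto margins because there is nothing to center it within, so shrink-wrap it and center it as an inline block instead:

        .profile-content { text-align: center; }  /* centers the shrink-wrapped ul */
        .content-btn {
          display: inline-block;  /* the ul is now only as wide as its items */
          width: auto;
        }
        .profile-content ul li {
          float: none;            /* inline-block items center; floats don't */
          display: inline-block;
        }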

    Read the article

  • How to resolve Unmet dependencies error?

    - by dandelion
    Using my new install of Ubuntu, I haven't been able to download anything from the Software Center (except the Maryo game) without getting the following error:

        The following packages have unmet dependencies:
         vlc: Depends: vlc-nox (= 1.1.12-2~oneiric1) but 1.1.12-2~oneiric1 is to be installed
              Depends: libaa1 (>= 1.4p5) but 1.4p5-38build1 is to be installed
              Depends: libavcodec-extra-53 (>= 4:0.7-1) but 4:0.7.3ubuntu0.11.10.1 is to be installed
              Depends: libavutil-extra-51 (>= 4:0.7-1) but 4:0.7.3ubuntu0.11.10.1 is to be installed
              Depends: libc6 (>= 2.8) but 2.13-20ubuntu5.1 is to be installed
              Depends: libfreetype6 (>= 2.2.1) but 2.4.4-2ubuntu1.1 is to be installed
              Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.1-9ubuntu3 is to be installed
              Depends: libqtcore4 (>= 4:4.7.0~beta1) but 4:4.7.4-0ubuntu8.1 is to be installed
              Depends: libqtgui4 (>= 4:4.5.3) but 4:4.7.4-0ubuntu8.1 is to be installed
              Depends: libsdl-image1.2 (>= 1.2.10) but 1.2.10-2.1 is to be installed
              Depends: libsdl1.2debian (>= 1.2.10-1) but 1.2.14-6.1ubuntu4 is to be installed
              Depends: libstdc++6 (>= 4.6) but 4.6.1-9ubuntu3 is to be installed
              Depends: libva-x11-1 (> 1.0.12~) but it is not going to be installed
              Depends: libva1 (> 1.0.12~) but it is not going to be installed
              Depends: libxcb-randr0 (>= 1.1) but it is not going to be installed
              Depends: libxcb-xv0 (>= 1.2) but it is not going to be installed
              Depends: zlib1g (>= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu3 is to be installed

    My system specs: Ubuntu 11.10, 64-bit; GE-G41M-ES2L motherboard; AMD 5770 video card; WDC Green 500 GB hard drive. I recently changed the motherboard, but otherwise my computer is unchanged from when I was previously running the same version of Ubuntu.

    EDIT: Still unable to download. Output of sudo apt-get update: ~$ sudo apt-get update Ign http://extras.ubuntu.com oneiric InRelease Ign http://security.ubuntu.com oneiric-security InRelease Ign http://archive.canonical.com oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Ign http://us.archive.ubuntu.com oneiric-backports InRelease Hit http://extras.ubuntu.com oneiric Release.gpg Hit http://archive.canonical.com oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Ign http://us.archive.ubuntu.com oneiric-proposed InRelease Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://extras.ubuntu.com oneiric Release Hit http://archive.canonical.com oneiric Release Hit http://security.ubuntu.com oneiric-security Release Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Hit http://us.archive.ubuntu.com oneiric-backports Release.gpg Hit http://extras.ubuntu.com oneiric/main Sources Hit http://archive.canonical.com oneiric/partner i386 Packages Hit http://security.ubuntu.com oneiric-security/main Sources Hit http://ppa.launchpad.net oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric-proposed Release.gpg Hit http://extras.ubuntu.com oneiric/main i386 Packages Ign http://extras.ubuntu.com oneiric/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Ign http://archive.canonical.com oneiric/partner TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe 
Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports Release Hit http://security.ubuntu.com oneiric-security/main Translation-en Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed Release Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/main Sources Hit http://us.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://us.archive.ubuntu.com oneiric-backports/universe Sources Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com 
oneiric-backports/universe TranslationIndex Ign http://extras.ubuntu.com oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric-proposed/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/universe TranslationIndex Ign http://archive.canonical.com oneiric/partner Translation-en_US Ign http://extras.ubuntu.com oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://archive.canonical.com oneiric/partner Translation-en Get:1 http://us.archive.ubuntu.com oneiric/main i386 Packages [1,583 kB] Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/main Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/universe Translation-en Err http://us.archive.ubuntu.com oneiric/main i386 Packages 404 Not Found [IP: 91.189.92.179 80] Fetched 1 B in 2s (0 B/s) W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.179 80] E: Some index files failed to download. They have been ignored, or old ones used instead.
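    The tail of that log is the real clue: the index fetch for oneiric/main ends in "404 Not Found", so apt is working from a broken package list. Some first steps that commonly resolve this class of error (a sketch, not a guaranteed fix for this machine):

        sudo apt-get update        # re-fetch the index that returned 404
        sudo apt-get -f install    # ask apt to repair the broken/partial state
        sudo apt-get install vlc   # retry the failing install

    If the 404 persists, switching from us.archive.ubuntu.com to the main server or another mirror (Software Sources > Download from) and running sudo apt-get update again is a reasonable next step.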

    Read the article

  • SQL Server 2008 R2 Reporting Services - The World is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

    - Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services
    - ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format
    - SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

    - US Census Bureau Website, http://www.census.gov/geo/www/tiger/
    - Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/
    - Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations, to embed a map in a report.

    Preparing the spatial data

    First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure why, but I had to uncheck the option to create a spatial index to upload the data; otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column, which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores.
    Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France:

    1. On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
    2. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
    3. On the Choose a connection to a SQL Server spatial data source page, select New.
    4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial
    5. Click OK and then click Next.
    6. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1
    7. Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page. You have the option to add a Bing Maps layer, which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names, roads, and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
    8. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
    9. Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next.
    10. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish.
    11. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers, each associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps:

    1. If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
    2. Click the New Layer Wizard button in the Map Layers window. Then we start over again by choosing a spatial data source: select SQL Server spatial query, and click Next.
    3. Select Add a new dataset with SQL Server spatial data, and click Next.
    4. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
    5. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier.
    6. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
    7. Select Embed map data in this report, and click Next.
    8. On the Choose map visualization page, select Basic Marker Map, and click Next.
    9. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
    10. Click Finish and then click Preview to see the rendered report.

    Et voilà... c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
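    For reference, the store-location query could look something like the following sketch. The author's exact script ships in the downloadable project, so the joins and column choices here are assumptions based on the AdventureWorks2008R2 schema:

        -- Fictional French store locations with their spatial points
        SELECT DISTINCT
               s.Name AS StoreName,
               a.SpatialLocation
        FROM   Sales.Store AS s
        JOIN   Sales.Customer AS c
               ON c.StoreID = s.BusinessEntityID
        JOIN   Person.BusinessEntityAddress AS bea
               ON bea.BusinessEntityID = s.BusinessEntityID
        JOIN   Person.Address AS a
               ON a.AddressID = bea.AddressID
        JOIN   Person.StateProvince AS sp
               ON sp.StateProvinceID = a.StateProvinceID
        WHERE  sp.CountryRegionCode = 'FR';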

    Read the article

  • I Can't Install or Remove Any Application

    - by berkay gürsoy
    When I try to install or remove an application via either the Software Center or apt-get, both fail and give some debconf errors. The log is below; please help.

        sudo apt-get install aptitude
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will also be installed:
          aptitude-common libboost-iostreams1.49.0 libcwidget3
        Suggested packages:
          aptitude-doc-en aptitude-doc tasksel debtags libcwidget-dev
        The following NEW packages will be installed:
          aptitude aptitude-common libboost-iostreams1.49.0 libcwidget3
        0 upgraded, 4 newly installed, 0 to remove and 48 not upgraded.
        8 not fully installed or removed.
        Need to get 0 B/2,498 kB of archives.
        After this operation, 10.4 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
        debconf: DbDriver "config": mkdir: No such file or directory
        Selecting previously unselected package aptitude-common.
        dpkg: warning: files list file for package 'aspell' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'ubuntu-desktop' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'vuze' missing; assuming package has no files currently installed
        dpkg: warning: files list file for package 'java-wrappers' missing; assuming package has no files currently installed
        (Reading database ... 198988 files and directories currently installed.)
        Unpacking aptitude-common (from .../aptitude-common_0.6.8.1-2ubuntu1_all.deb) ...
        Selecting previously unselected package libboost-iostreams1.49.0.
        Unpacking libboost-iostreams1.49.0 (from .../libboost-iostreams1.49.0_1.49.0-3.1ubuntu1_amd64.deb) ...
        Selecting previously unselected package libcwidget3.
        Unpacking libcwidget3 (from .../libcwidget3_0.5.16-3.4ubuntu1_amd64.deb) ...
        Selecting previously unselected package aptitude.
        Unpacking aptitude (from .../aptitude_0.6.8.1-2ubuntu1_amd64.deb) ...
        Setting up wicd-daemon (1.7.2.4-2ubuntu1) ...
        Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
        debconf: DbDriver "config": mkdir: No such file or directory
        dpkg: error processing wicd-daemon (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up man-db (2.6.3-1) ...
        Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
        debconf: DbDriver "config": mkdir: No such file or directory
        dpkg: error processing man-db (--configure):
         subprocess installed post-installation script returned error exit status 1
        Setting up dictionaries-common (1.12.10) ...
        Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3.
        Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3.
        debconf: DbDriver "config": mkdir: No such file or directory
        dpkg: error processing dictionaries-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of aspell:
         aspell depends on dictionaries-common (>> 0.40); however:
          Package dictionaries-common is not configured yet.
        dpkg: error processing aspell (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of aspell-en:
         aspell-en depends on aspell (>= 0.60.3-2); however:
          Package aspell is not configured yet.
         aspell-en depends on dictionaries-common (>= 0.49.2); however:
          Package dictionaries-common is not configured yet.
        dpkg: error processing aspell-en (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of hyphen-en-us:
         hyphen-en-us depends on dictionaries-common (>= 0.10) | openoffice.org-updatedicts; however:
          Package dictionaries-common is not configured yet.
          Package openoffice.org-updatedicts is not installed.
          Package dictionaries-common which provides openoffice.org-updatedicts is not configured yet.
        dpkg: error processing hyphen-en-us (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wicd-gtk:
         wicd-gtk depends on wicd-daemon (= 1.7.2.4-2ubuntu1); however:
          Package wicd-daemon is not configured yet.
        dpkg: error processing wicd-gtk (--configure):
         dependency problems - leaving unconfigured
        dpkg: dependency problems prevent configuration of wicd:
         wicd depends on wicd-daemon (= 1.7.2.4-2ubuntu1); however:
          Package wicd-daemon is not configured yet.
         wicd depends on wicd-gtk (= 1.7.2.4-2ubuntu1) | wicd-curses (= 1.7.2.4-2ubuntu1) | wicd-cli (= 1.7.2.4-2ubuntu1) | wicd-client; however:
          Package wicd-gtk is not configured yet.
          Package wicd-curses is not installed.
          Package wicd-cli is not installed.
          Package wicd-client is not installed.
          Package wicd-gtk which provides wicd-client is not configured yet.
        dpkg: error processing wicd (--configure):
         dependency problems - leaving unconfigured
        Setting up aptitude-common (0.6.8.1-2ubuntu1) ...
        Setting up libboost-iostreams1.49.0 (1.49.0-3.1ubuntu1) ...
        Setting up libcwidget3 (0.5.16-3.4ubuntu1) ...
        Setting up aptitude (0.6.8.1-2ubuntu1) ...
        update-alternatives: using /usr/bin/aptitude-curses to provide /usr/bin/aptitude (aptitude) in auto mode
        Processing triggers for libc-bin ...
        ldconfig deferred processing now taking place
        Errors were encountered while processing:
         wicd-daemon
         man-db
         dictionaries-common
         aspell
         aspell-en
         hyphen-en-us
         wicd-gtk
         wicd
        E: Sub-process /usr/bin/dpkg returned an error code (1)
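    Every failure above traces back to the same line: debconf: DbDriver "config": mkdir: No such file or directory. That is, debconf cannot create its database directory, so every package's post-installation script aborts. A possible repair sequence (an assumption based on that error, not a verified fix for this machine):

        sudo mkdir -p /var/cache/debconf      # recreate debconf's database directory
        sudo apt-get install --reinstall debconf
        sudo dpkg --configure -a              # retry the eight half-configured packages
        sudo apt-get -f install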

    Read the article

  • Accessing SharePoint 2010 Data with REST/OData on Windows Phone 7

    - by Jan Tielens
    Consuming SharePoint 2010 data in Windows Phone 7 applications using the CTP version of the developer tools is quite a challenge. The issue is that the SharePoint 2010 data is not anonymously available; users need to authenticate to be able to access the data. When I first tried to access SharePoint 2010 data from my first Hello-World-type Windows Phone 7 application, I thought "Hey, this should be easy!" because Windows Phone 7 development is based on Silverlight, and SharePoint 2010 has a Client Object Model for Silverlight. Unfortunately, you can't use the Client Object Model of SharePoint 2010 on the Windows Phone platform; there's a reference to an assembly that's not available (System.Windows.Browser). My second thought was "OK, no problem!" because SharePoint 2010 also exposes a REST/OData API to access SharePoint data. Using the REST API in SharePoint 2010 is as easy as making a web request for a URL (in which you specify the data you'd like to retrieve), e.g. http://yoursiteurl/_vti_bin/listdata.svc/Announcements. This is very easy to accomplish in a Silverlight application that's running in the context of a page in a SharePoint site, because the credentials of the currently logged-on user are automatically picked up and passed to the WCF service. But a Windows Phone application is of course running outside of the SharePoint site's page, so the application has to build credentials that are then passed to SharePoint's WCF service. This turns out to be a small challenge in Silverlight 3: the WebClient doesn't support authentication. There is a Credentials property, but when you set it and make the request, you get a NotImplementedException. This issue will probably be solved in the very near future, since Silverlight 4 does support authentication, and there's already a WCF Data Services download that uses this new platform feature of Silverlight 4. So when the Windows Phone platform switches to Silverlight 4, you can just use the WebClient to get the data. Even more, if the OData Client Library for Windows Phone 7 gets updated after that, things should get even easier! By the way: the things I'm writing in this paragraph are just assumptions that I make, which make a lot of sense IMHO; I don't have any information that all of this will happen, but I really hope so. So are SharePoint developers out of the Windows Phone development game until they get this fixed? Well, luckily not: when the HttpWebRequest class is used instead, you can pass credentials! Using the HttpWebRequest class is slightly more complex than using the WebClient class, but the end result is that you have access to your precious SharePoint 2010 data.
    The following code snippet gets all the announcements from an Announcements list in a SharePoint site:

        HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(
            "http://yoursite/_vti_bin/listdata.svc/Announcements");
        webReq.Credentials = new NetworkCredential("username", "password");

        webReq.BeginGetResponse(
            (result) =>
            {
                HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;

                XDocument xdoc = XDocument.Load(
                    ((HttpWebResponse)asyncReq.EndGetResponse(result)).GetResponseStream());

                XNamespace ns = "http://www.w3.org/2005/Atom";
                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                        MessageBox.Show(item.Title);
                });
            }, webReq);

    When you try this in a Windows Phone 7 application, make sure you add a reference to the System.Xml.Linq assembly, because the code uses LINQ to XML to parse the resulting Atom feed; the Title of every announcement is then displayed in a MessageBox. Check out my previous post if you'd like to see a more polished sample Windows Phone 7 application that displays SharePoint 2010 data. When you plan to use this technique, it's of course a good idea to encapsulate the code doing the request, so it becomes really easy to get the data that you need. In the following code snippet you can find the GetAtomFeed method, which gets the contents of any Atom feed, even if you need to authenticate to get access to the feed.

        delegate void GetAtomFeedCallback(Stream responseStream);

        public MainPage()
        {
            InitializeComponent();

            SupportedOrientations = SupportedPageOrientation.Portrait |
                SupportedPageOrientation.Landscape;

            string url = "http://yoursite/_vti_bin/listdata.svc/Announcements";
            string username = "username";
            string password = "password";
            string domain = "";

            GetAtomFeed(url, username, password, domain, (s) =>
            {
                XNamespace ns = "http://www.w3.org/2005/Atom";
                XDocument xdoc = XDocument.Load(s);

                var items = from item in xdoc.Root.Elements(ns + "entry")
                            select new { Title = item.Element(ns + "title").Value };

                this.Dispatcher.BeginInvoke(() =>
                {
                    foreach (var item in items)
                    {
                        MessageBox.Show(item.Title);
                    }
                });
            });
        }

        private static void GetAtomFeed(string url, string username,
            string password, string domain, GetAtomFeedCallback cb)
        {
            HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
            webReq.Credentials = new NetworkCredential(username, password, domain);

            webReq.BeginGetResponse(
                (result) =>
                {
                    HttpWebRequest asyncReq = (HttpWebRequest)result.AsyncState;
                    HttpWebResponse resp = (HttpWebResponse)asyncReq.EndGetResponse(result);
                    cb(resp.GetResponseStream());
                }, webReq);
        }

    Read the article

  • Extract data from a specific range of cells in multiple worksheets in multiple files

    - by Michele
    I need to extract data from a specific range of cells (always the same cells) in multiple worksheets across multiple files, where one file = one day. I have six technicians each day of the week, Monday through Friday, so: five files, each with six worksheets. I have entered specific info in specific cells of every worksheet, and the range is constant (the same address in EVERY worksheet in every file). I need a formula to extract and calculate the data in the given range and dump it into another spreadsheet. I can forward an example file if it will help anyone answer my question, and more explanation is available upon request. Please, somebody help me! Thank you all in advance. Regards, Michele
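    For reference, one way to pull a fixed cell from another workbook's sheet is an external reference formula; the path, file name, and sheet names below are hypothetical placeholders:

        ='C:\Reports\[Monday.xlsx]Tech1'!$B$2

    And within a single workbook, a 3-D reference can aggregate the same cell across a run of sheets:

        =SUM(Tech1:Tech6!$B$2)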

    Read the article

  • How can I recover [data from] my failing USB key?

    - by moe37x3
    I have a Corsair Flash Voyager USB key, and it has almost completely failed. When I plug it into my computer (Windows XP), the OS mounts it and opens Explorer at the drive's root directory. However, if I try to copy any data off, I get an error message saying that the device is not there. If I leave it plugged in, the OS seems to oscillate between seeing it and not seeing it, since the "Safely Remove Hardware" tray icon appears and disappears every few seconds. The damage was probably caused by my abuse, either from plugging it in with my keys hanging off of it or from losing the cap and keeping it in my pocket uncapped. Is there anything I can do to save the data from it, or even rehabilitate the drive?

    Read the article

  • Changing Word mail merge data source locations in bulk?

    - by Daft Viking
    I've just moved a number of Word mail merge files, and a number of Excel spreadsheets that are the data sources for the mail merges, from a Windows XP computer to a Windows 7 computer. Now all the paths for the merge sources are incorrect (they used to be c:\documents and settings\user\my documents.... and are now c:\users\documents....). While I can correct the path of the data source in each file individually, I was hoping there would be some way of updating the files in bulk, as there are a relatively large number of them. Word 2007 is what is being used, but the documents are all in the older DOC format (not DOCX).

    Read the article

  • Does replacing Chrome's User Data folder with my own work without leaving any trace behind? Where else does Chrome write data outside of the User Data folder?

    - by Selin Peck
    Does replacing Chrome's User Data folder with my own work without leaving any trace behind? And where else does Chrome write data outside of the User Data folder? I used to start office work by removing Chrome's User Data folder, replacing it with my own User Data copied from my external drive, and saving the original User Data to another folder. Before leaving in the evening, I take back my own User Data and restore the original User Data to where it was originally saved. Is this process advisable? Would I be safe this way? If not, where else does Chrome save data outside of the User Data folder in AppData? Also, how does this process work in Mozilla Firefox?

    Read the article

  • How do I populate multiple records of data into a PDF form like a mail-merge?

    - by user38801
    I have Acrobat Pro, and I have a PDF with a form on it. Assuming the fields in the form correspond to a data source (like rows in an RDBMS table or an XML file), I want to print multiple copies of the PDF file, with each copy having the values of a different row in the data source. It would be preferable to interface directly with an actual database rather than having to save an XML file every time I do this. If this involves programming, that's cool too; I only posted here because the question didn't seem appropriate for Stack Overflow. Thanks!

    Read the article

  • Ways to improve completeness of files for data recovery and scanning?

    - by SteveO
    I am using R-Studio for data recovery on one of my NTFS partitions. There is a PDF file of about 16 MB, but the software can only recover 15 MB of it. So I am wondering what can be done to improve the quality of the software's scanning and recovery. I am looking through its preferences, but I am not sure whether there are adjustable parameters for scanning and recovery that can be fine-tuned to improve the results. R-Studio has a free demo version, for which scanning is free but recovery isn't; it is downloadable from http://www.data-recovery-software.net/Data_Recovery_Download.shtml, and its manual is here: http://www.r-tt.com/downloads/Recovery_Manual.pdf. I have tried my best to search for answers in the manual but failed to find one, and their technical support is not as good as their software (usually unhelpful, in my opinion). Thanks!

    Read the article

  • Get dynamic table data from the GUI in Selenium WebDriver

    - by Rabindra
    I am working on a web-based application that I am testing with Selenium. On one page the content is dynamically loaded into a table. I want to get the table data, but I am getting an "org.openqa.selenium.NullPointerElementException" on this line:

        WebElement table = log.driver.findElement(By.xpath(tableXpath));

    I tried the following complete code:

        public int selectfromtable(String tableXpath, String CompareValue, int columnnumber) throws Exception {
            WebElement table = log.driver.findElement(By.xpath(tableXpath));
            List<WebElement> rows = table.findElements(By.tagName("tr"));
            int flag = 0;
            for (WebElement row : rows) {
                List<WebElement> cells = row.findElements(By.tagName("td"));
                if (!cells.isEmpty() && cells.get(columnnumber).getText().equals(CompareValue)) {
                    flag = 1;
                    Thread.sleep(1000);
                    break;
                } else {
                    Thread.sleep(2000);
                    flag = 0;
                }
            }
            return flag;
        }

    I am calling the above method like this:

        String tableXpath = ".//*[@id='event_list']/form/div[1]/table/tbody/tr/td/div/table";
        selectfromtable(tableXpath, eventType, 3);

    My HTML page is like:

        <table width="100%">
          <tbody style="overflow: auto; background-color: #FFFFFF">
            <tr class="trOdd">
              <td width="2%" align="center">
              <td width="20%" align="center"> Account </td>
              <td width="20%" align="center"> Enter Collection </td>
              <td width="20%" align="center">
              <td width="20%" align="center"> 10 </td>
              <td width="20%" align="center"> 1 </td>
            </tr>
          </tbody>
          <tbody style="overflow: auto; background-color: #FFFFFF">
            <tr class="trEven">
              <td width="2%" align="center">
              <td width="20%" align="center"> Account </td>
              <td width="20%" align="center"> Resolved From Collection </td>
              <td width="20%" align="center">
              <td width="20%" align="center"> 10 </td>
              <td width="20%" align="center"> 1 </td>
            </tr>
          </tbody>
        </table>
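    Two things are worth checking here, offered as a sketch rather than a confirmed diagnosis: a NullPointerException on that line usually means log.driver itself was never initialized, and for dynamically loaded content an explicit wait is more reliable than finding the element immediately:

        // Wait up to 30 seconds for the dynamically loaded table to exist
        // before touching it (assumes "driver" is an initialized WebDriver).
        WebDriverWait wait = new WebDriverWait(driver, 30);
        WebElement table = wait.until(
            ExpectedConditions.presenceOfElementLocated(By.xpath(tableXpath)));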

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
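    As a concrete illustration, the serialization calls the article describes boil down to something like this sketch (the chart IDs and file path are hypothetical):

        // Persist the chart's appearance and data to disk...
        Chart1.Serializer.Content = SerializationContents.All;
        Chart1.Serializer.Save(@"C:\temp\chart-snapshot.xml");

        // ...and later reconstitute a chart from the snapshot.
        Chart2.Serializer.Load(@"C:\temp\chart-snapshot.xml");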

    Read the article

  • Oracle Database Security: Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM complements the overall Oracle security set of solutions nicely. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun, this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop! With the recent release of Oracle IRM 11g, I was tasked with configuring demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments, I thought I would put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options:

    - Transparent Data Encryption protecting the communication from the Oracle IRM server to the database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end.
    - Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data while it resides at rest in the database tablespace.
    - Database Vault, used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database while retaining their ability to administer the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.

    Read the article

  • Join our webcast: Discover What’s New in Oracle Data Integrator and Oracle GoldenGate

    - by Irem Radzik
    The data integration team has organized a series of webcasts for this summer. We are kicking it off this Thursday, June 30th, at 10am PT with a product update webcast: Discover What's New in Oracle Data Integrator and Oracle GoldenGate. In this webcast you will hear from product management about the new patch updates to both GoldenGate 11gR1 and ODI 11gR1. Jeff Pollock, Sr. Director of Product Management for ODI, will talk about the new features in Oracle Data Integrator 11.1.1.5, including the data lineage integration with OBI EE, enhanced web services to support flexible architectures, and capabilities for efficient object execution such as Load Plans. Jeff will also discuss support for complex files and performance enhancements. Chris McAllister, Sr. Director of Product Management for Oracle GoldenGate, will cover the new features of Oracle GoldenGate 11.1.1.1, such as increased data security through support for the Oracle Database Advanced Security option, deeper integration with Oracle Database, and the expanded list of heterogeneous databases GoldenGate supports. Chris will also talk about the new Oracle GoldenGate 11gR1 release for the HP NonStop platform and will provide information on our strategic direction for product development. Join us this Thursday at 10am PT / 1pm ET to hear directly from Data Integration Product Management. You can register here for the June 30th webcast as well as for the upcoming ones in our summer webcast series.

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3, and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I have written about transaction logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process - say railway reservations, banks, customer support, etc. - there is a system of lines and queues to facilitate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects that one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data.
    - Concurrency is also affected when multiple processes attempt to change the same data simultaneously.
    - There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic concurrency:
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other processes can modify that data. Also acquires locks on data being modified, so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic concurrency:
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers and writers do not block readers, but writers can and will block writers.

    Transaction Processing

    - A transaction is the basic unit of work in SQL Server.
    - A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with a BEGIN TRAN and the end is marked by a COMMIT TRAN or ROLLBACK TRAN).
    - Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases, and it ensures that all transactions are performed as a single unit of work regardless of hardware or system failure: A - Atomicity, C - Consistency, I - Isolation, D - Durability.

    - Atomicity: Each transaction is treated as all or nothing - it either commits or aborts.
    - Consistency: Ensures that a transaction won't allow the system to arrive at an incorrect logical state - the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: Separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: After a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors (known as dependency problems or consistency problems):

    - Lost updates: Occur when two processes read the same data, both manipulate the data, changing its value, and then both try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty reads: Occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable reads: A read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: Occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports five isolation levels that control the behavior of read operations.

    Read Uncommitted:
    - All the behaviors above except lost updates are possible.
    - Implemented by allowing read operations to not take any locks; because of this, a read won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you read a row before it is updated and an update then moves the row to a higher page number than your scan encounters later.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely; this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed:
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Ensures that a read never reads data that another application hasn't committed.
    - If another transaction is updating data and has exclusive locks on the data, your transaction will have to wait for the locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but it prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as it was before the update operation began.

    Repeatable Read:
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot:
    - Snapshot isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable:
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock data that has been read, but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - Gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking, and deadlocks, because they are the fundamental blocks that make concurrency possible. Now, if you are with me, let us continue learning about SQL Server locking basics.
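    To make the isolation levels concrete, here is a small T-SQL sketch (not from the post; the table and values are hypothetical) showing how a session opts into two of the levels discussed above:

        -- Dirty read: may see another session's uncommitted change.
        SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;

        -- Row-versioned read: requires ALLOW_SNAPSHOT_ISOLATION ON for the database.
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
        SELECT Balance FROM dbo.Accounts WHERE AccountId = 1;  -- sees the last committed version
        COMMIT;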
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article

  • Methodology behind fetching large XML data sets in pieces

    - by Jerry Dodge
    I am working on an HTTP server in Delphi which simply sends back a custom XML dataset. I am not following any standard format, such as SOAP. I have the system working seamlessly, except for one small flaw: when I have a very large dataset to send back to the client, it might take up to 2 minutes for all the data to be transferred. The HTTP server I'm building is essentially an XML-based API around a database, implementing common business rules, so the requests are specific to the data behind the system. When, for example, I fetch a large set of product data, I would like to break it down and send it back piece by piece. However, a single HTTP request calls for a single response. I can't keep feeding the client multiple XML packets unless the client explicitly requests them. I don't have any session management, just an API key. I know that if I had sessions, I could keep a dataset alive temporarily for a client, and the client could request it bit by bit. Without session management, however, I would have to execute the SQL query multiple times (once for each chunk of data), and in the meantime, if that data changes, the "pages" might get out of sync, causing items to show up on the wrong pages after navigating to a different page. So how is this commonly handled? What's the methodology behind breaking a large XML dataset into chunks while keeping the load down?
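    One common answer (offered here as a sketch, not something from the original question) is keyset pagination: have the client send back the last key it received, so each chunk is anchored to a stable key value rather than a page number, and concurrent inserts or deletes cannot shift rows across chunk boundaries. A minimal T-SQL sketch, with a hypothetical Product table keyed on ProductId:

    -- First chunk
    SELECT TOP (500) ProductId, Name, Price
    FROM dbo.Product
    ORDER BY ProductId;

    -- Next chunk: the client sends back the last ProductId it received
    DECLARE @LastProductId INT = 500;  -- value taken from the previous chunk
    SELECT TOP (500) ProductId, Name, Price
    FROM dbo.Product
    WHERE ProductId > @LastProductId
    ORDER BY ProductId;

    Because each request stands alone, no session state is required: the API key plus the last key received fully describe the next chunk, and the server never has to keep a result set alive between requests.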

    Read the article

  • Announcing Sesame Data Browser

    - by Fabrice Marguerie
    On the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite.

    Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just revamped dedicated website: http://odata.org. There you'll find information about the OData protocol, which allows you to publish and consume data on the web, the OData SDK (with client libraries for .NET, Java, JavaScript, PHP, iPhone, and more), a list of OData producers, and a list of OData consumers.

    This is where Sesame Data Browser comes into play: it's one of the tools you can use today to consume OData. I'll let you have a look, but be aware that this is just a preview and many additional features are coming soon.

    Sesame Data Browser is part of a bigger picture than just OData, one that will take shape over the coming months. Sesame is a project I've been working on for many months now, so what you see today is just a start :-)

    I hope you'll enjoy what you see. Let me know what you think.

    Read the article

  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as a database query. The appearance of the chart can be modified dynamically as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is also possible to define the chart's data and appearance statically, strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as disk. Deserialization is the inverse process: taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!

    Read the article

  • “Big Data” Is A Small Concept Unless You Can Apply It To The Customer Experience

    - by Michael Hylton
    There’s been a lot of recent talk in the industry about “big data”. Much can be said about the importance of big data and the results that come from it, but you always need to consider the customer experience when analyzing and applying customer data.

    Personalization and merchandising drive the user experience. Big data should enable you to gain valuable insight into each of your customers and apply that insight at the moment they are on your Web site, talking to one of your call center agents, or at any other touchpoint. While past customer experience is important, you need to combine it with what your customer is doing on your Web site now, as well as what they are doing and saying on social networking sites. It’s key to have a 360-degree view of your customer across all of your touchpoints in order to provide the relevant and consistent experience they have come to expect when interacting with your brand.

    Big data can enable you to effectively market, merchandise, and recommend the right products to the right customers at the right time. By taking customer data and applying it to product recommendations, you have an opportunity to gain a greater share of wallet through the cross-selling and up-selling of additional products and services. You can also build loyalty programs that keep customers engaged throughout their long-term relationship with your brand.

    Read the article

