Search Results

Search found 55281 results on 2212 pages for 'get set'.


  • How do I set the permalink of a blog post according to its date and title?

    - by Amit
    I have this website: http://www.finalyearondesk.com . My blog post links are currently set like this: http://www.finalyearondesk.com/index.php?id=28 . I want them to be set like this: finalyearondesk.com/2011/09/22/how-to-recover-ubuntu-after-it-is-crashed/ . I am using the following function to fetch the posts:

        function get_content($id = '') {
            if ($id != ""):
                $id = mysql_real_escape_string($id);
                $sql = "SELECT * FROM blog WHERE id = '$id'";
                $return = '<p><a href="http://www.finalyearondesk.com/">Go back to Home page</a></p>';
                echo $return;
            else:
                $sql = "SELECT * FROM blog ORDER BY id DESC";
            endif;
            $res = mysql_query($sql) or die(mysql_error());
            if (mysql_num_rows($res) != 0):
                while ($row = mysql_fetch_assoc($res)) {
                    echo '<h1><a href="index.php?id=' . $row['id'] . '">' . $row['title'] . '</a></h1>';
                    echo '<p>' . "By: " . '<font color="orange">' . $row['author'] . '</font>' . ", Posted on: " . $row['date'] . '</p>';
                    echo '<p>' . $row['body'] . '</p><br />';
                }
            else:
                echo '<p>We are really very sorry, this page does not exist!</p>';
            endif;
        }

    Any suggestions on how to do this? And can it be done using .htaccess?
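    One common direction, sketched with hedges: yes, Apache's mod_rewrite (via .htaccess) can map pretty URLs onto index.php, but the PHP side then has to look posts up by date and slug rather than by numeric id, which means storing a slug (and using the post date) in the blog table. The parameter names below (year, month, day, slug) are illustrative assumptions, not from the original post:

        # .htaccess sketch; assumes mod_rewrite is enabled and that
        # index.php knows how to resolve a post from these parameters
        RewriteEngine On
        RewriteRule ^([0-9]{4})/([0-9]{2})/([0-9]{2})/([a-z0-9-]+)/?$ index.php?year=$1&month=$2&day=$3&slug=$4 [L,QSA]

    A slug is typically derived once from the title (lowercased, non-alphanumerics collapsed to hyphens) and saved alongside the post, so the lookup becomes a simple indexed equality match.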

    Read the article

  • How to set individual NTFS partition permission behaviour for each user account?

    - by ryniek
    I have two NTFS partitions (DOWNLOADS for downloaded files and VM for my VirtualBox .vdi file) for which I must have full permissions under my everyday account. They should also automount when I log in to that account. I've also set up a Guest account for guests. For the Guest account, I want the VM partition fully disabled and invisible (so it must not automount), while the DOWNLOADS partition should be shared with the Guest account with limited privileges. By editing fstab I'm able to share the DOWNLOADS partition with Guest at limited privileges, but VM can only be set to limited access with automounting disabled: the guest can't mount it, but it can still be seen in Nautilus, and I must always mount it manually when I log in to my everyday account. Is there some trick to achieve what I want? Here's my fstab config:

        # /etc/fstab: static file system information.
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # Entry for /dev/sda1 :
        UUID=35e66658-5ee9-40cf-bf56-8204959e3df0 / xfs defaults 0 1
        # Entry for /dev/sda2 :
        UUID=26c714cf-4236-45e7-9c46-cfcf91a215ae /home xfs defaults 0 2
        # Entry for /dev/sda5 :
        UUID=1315BCB027C44639 /media/DOWNLOADS ntfs-3g auto,uid=1000,gid=1000,umask=0022,nodev,locale=pl_PL.utf8 0 0
        # Entry for /dev/sda6 :
        UUID=60FF39EB72B72264 /media/VM ntfs noauto,uid=1000,gid=1000,umask=0077,nodev,locale=pl_PL.utf8 0 0
        # Entry for /dev/sda7 :
        UUID=c52411f5-105c-45d1-971f-412f962c350e none swap sw 0 0
        /dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

    Thanks
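    One possible direction, offered as a sketch rather than a verified fix: mount VM automatically with owner-only permissions (umask=0077 already restricts it to uid 1000, so Guest cannot read it even when mounted), and hide it from the file manager. On systems where the gvfs/udisks2 stack honours it, the x-gvfs-hide mount option keeps an entry out of Nautilus; whether the gvfs shipped with your Ubuntu version supports that option is an assumption to verify.

        # Hypothetical fstab entry for /dev/sda6: auto-mounts at boot,
        # readable only by uid 1000, and hidden from Nautilus if the
        # installed gvfs supports x-gvfs-hide
        UUID=60FF39EB72B72264 /media/VM ntfs-3g auto,uid=1000,gid=1000,umask=0077,nodev,x-gvfs-hide,locale=pl_PL.utf8 0 0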

    Read the article

  • What is meant by "no password set" for the root account (and others)?

    - by MMA
    Several years back, we were more accustomed to changing to the root account using the su command: first switch to the root account, then execute the root commands. Now we are more accustomed to using the sudo command. But we know that the root account is there; we can readily find the home directory of user root:

        $ ls -ld /root/
        drwx------ 18 root root 4096 Oct 22 17:21 /root/

    Now my point is, it is stated that "the root password in Ubuntu is left unset". Please see the answers to this question; most of them say something to this effect in the first paragraph, and one or two further state that "the account is left disabled". Now my (primary) questions are: What is meant by an unset password? Is it blank? Is it null? Or something else more cryptic? And how does the account become enabled once I set a password for it (sudo passwd root)? To get a better understanding, I checked the /etc/shadow file. Since I have already set a password for the root account, I can no longer see what was there before (only the encrypted password). So I created another account and left it disabled. The corresponding entry in the /etc/shadow file is:

        testpassword:!:16020:0:99999:7:::

    Now perhaps my queries should be rephrased as: what does an ! in the password field mean? Other encrypted passwords are very long cryptic strings; how come this "encrypted" form is only one character long? And does an account become disabled if I put an ! in the (encrypted) password field?
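    As general background (standard shadow(5) behaviour, not something taken from the original question): the second field of /etc/shadow holds the password hash, and any value that cannot possibly be a valid hash, conventionally ! or *, makes password authentication fail for that account. That is what "locked" or "no password set" means. A quick way to inspect it:

        # Print the password field for root (requires root privileges)
        sudo awk -F: '$1 == "root" { print $2 }' /etc/shadow
        # "!" or "*"    -> no valid hash, password login disabled
        # "$6$salt$..." -> a real SHA-512 hash, password login possible

    On this reading, "unset" is neither blank nor null: it is a deliberate one-character placeholder that no password can ever hash to, and sudo passwd root enables the account simply by replacing it with a real hash.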

    Read the article

  • Why won't Webmaster Tools let me set a preferred domain? (It says to verify, but the site should already be verified)

    - by Su'
    I've got a domain that I apparently forgot to set a preferred domain for, so I just tried to do it. Webmaster Tools instead popped up a little box:

        Part of the process of setting a preferred domain is to verify that you own http://www.example.com/. Please verify http://www.example.com/.

    I'm running into some problems following these instructions. As far as I can tell, I already verified sometime in the past: there's a TXT DNS record with the gibberish Google tells you to set, which I couldn't have come up with myself, and nothing is telling me this information is bad. So let's assume the site is somehow not actually verified. All the various methods' instructions start with "click the Manage Site button next to the site you want, and then click Verify this site." That button doesn't even exist on my screens. (It presumably goes away when you successfully verify?) Those instructions were all updated pretty recently, and the DNS one in particular just a couple of weeks ago, so it seems a bit unlikely they're inaccurate. I am not using Apps, and won't be, so I can't try out the verification through there. Note that I also have another domain, for which I have not done any verification, that shows the same behavior (no such button, being told to verify when it seems impossible, etc.), so something appears to just be broken. I already have a no-www redirect in place server-side, so we can skip that; I'm just trying to get the box checked off in GWT. If I don't get any resolution, I'll eventually scrap the TXT record, see if the site gets un-verified (or whatever, since it thinks it isn't), and see if I can just restart the process. It's not urgent, so I'm just trying to figure out if I've gone blind to something. Did the button move?
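    One quick sanity check worth doing first (a generic DNS lookup, not a step from the original post): confirm the verification TXT record is actually being served by public DNS, since Webmaster Tools can only see what the resolvers return.

        # Replace example.com with the real domain; the
        # google-site-verification=... token should appear in the output
        dig TXT example.com +short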

    Read the article

  • How can I include multiple tables in my LINQ to Entities eager loading using MVC4 C#?

    - by EBENEZER CURVELLO
    I have 6 classes, and I'm trying to use LINQ to Entities to get the SiglaUF information from the deepest table (in the MVC view). The problem is that I receive the following error: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection." The view is like this:

        @model IEnumerable<DiskPizzaDelivery.Models.EnderecoCliente>
        @foreach (var item in Model) {
            @Html.DisplayFor(modelItem => item.CEP.Cidade.UF.SiglaUF)
        }

    The query that I use:

        var cliente = context.Clientes
            .Include(e => e.Enderecos)
            .Include(e1 => e1.Enderecos.Select(cep => cep.CEP))
            .SingleOrDefault();

    The question is: how can I improve this query to eager load "Cidade" and "UF" as well? See the classes below:

        public partial class Cliente
        {
            [Key]
            [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
            public int IdCliente { get; set; }
            public string Email { get; set; }
            public string Senha { get; set; }
            public virtual ICollection<EnderecoCliente> Enderecos { get; set; }
        }

        public partial class EnderecoCliente
        {
            public int IdEndereco { get; set; }
            public int IdCliente { get; set; }
            public string CEPEndereco { get; set; }
            public string Numero { get; set; }
            public string Complemento { get; set; }
            public string PontoReferencia { get; set; }
            public virtual Cliente Cliente { get; set; }
            public virtual CEP CEP { get; set; }
        }

        public partial class CEP
        {
            public string CodCep { get; set; }
            public string Tipo_Logradouro { get; set; }
            public string Logradouro { get; set; }
            public string Bairro { get; set; }
            public int CodigoUF { get; set; }
            public int CodigoCidade { get; set; }
            public virtual Cidade Cidade { get; set; }
        }

        public partial class Cidade
        {
            public int CodigoCidade { get; set; }
            public string NomeCidade { get; set; }
            public int CodigoUF { get; set; }
            public virtual ICollection<CEP> CEPs { get; set; }
            public virtual UF UF { get; set; }
            public virtual ICollection<UF> UFs { get; set; }
        }

        public partial class UF
        {
            public int CodigoUF { get; set; }
            public string SiglaUF { get; set; }
            public string NomeUF { get; set; }
            public int CodigoCidadeCapital { get; set; }
            public virtual ICollection<Cidade> Cidades { get; set; }
            public virtual Cidade Cidade { get; set; }
        }

    The full query with its filters:

        var cliente = context.Clientes
            .Where(c => c.Email == email)
            .Where(c => c.Senha == senha)
            .Include(e => e.Enderecos)
            .Include(e1 => e1.Enderecos.Select(cep => cep.CEP))
            .SingleOrDefault();

    Thanks!
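    A hedged sketch of the usual fix for this pattern (assuming Entity Framework 4.1 or later, with the lambda-based Include extension from System.Data.Entity in scope): extend the Include path through the reference navigations so CEP, Cidade, and UF are all materialized before the context is disposed.

        using System.Data.Entity; // lambda-based Include() extension method

        // Sketch, not the poster's confirmed solution: one Include that
        // walks the whole path Enderecos -> CEP -> Cidade -> UF
        var cliente = context.Clientes
            .Where(c => c.Email == email && c.Senha == senha)
            .Include(c => c.Enderecos.Select(e => e.CEP.Cidade.UF))
            .SingleOrDefault();

    Inside Select, plain member access continues the path across reference navigations, so this single Include replaces both of the originals. If the lambda overload is unavailable, the string form .Include("Enderecos.CEP.Cidade.UF") expresses the same path.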

    Read the article

  • How can I set my screen resolution to match my TV?

    - by Scott Severance
    I have a computer in my classroom that's connected to an LG smart TV (which is actually not so smart; I wouldn't recommend buying one). For the touch interface, the TV wants a resolution of 1920x1080 at 60Hz. However, I can't seem to set the computer to that resolution: the display settings only offer 1024x768 and 640x480. The computer dual-boots with Windows XP, where widescreen options are available in approximately the required size, but the exact resolution, or even aspect ratio, isn't available in XP either. I tried the following command:

        xrandr -s 1920x1080 -r 60

    The response was:

        Size 1920x1080 not found in available modes

    Back in the old days, the solution would be to edit xorg.conf. However, since that file no longer exists by default, and I haven't found up-to-date info, I don't know what else to do. If it helps, this machine will never be connected to a different display, so resolution flexibility isn't important. Here's the output of lshw:

        *-display:0
             description: VGA compatible controller
             product: 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 03
             width: 64 bits
             clock: 33MHz
             capabilities: vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:42 memory:fe800000-febfffff memory:d0000000-dfffffff ioport:ecd8(size=8)
        *-display:1 UNCLAIMED
             description: Display controller
             product: 4 Series Chipset Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2.1
             bus info: pci@0000:00:02.1
             version: 03
             width: 64 bits
             clock: 33MHz

    According to the system settings, my graphics driver is unknown and my "experience" is standard. This is 64-bit Ubuntu 12.04 (Precise). Note: there are a number of questions similar to this one, but they didn't include any answers that helped me.

    Update: After posting this question, I noticed one in the sidebar that I hadn't found through search but which appeared to contain the answer.
    Based on that question, I created the /etc/X11/xorg.conf file below:

        Section "ServerLayout"
            Identifier     "X.org Configured"
            Screen      0  "Screen0" 0 0
            InputDevice    "Mouse0" "CorePointer"
            InputDevice    "Keyboard0" "CoreKeyboard"
        EndSection

        Section "Files"
            ModulePath   "/usr/lib/xorg/modules"
            FontPath     "/usr/share/fonts/X11/misc"
            FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
            FontPath     "built-ins"
        EndSection

        Section "Module"
            Load  "glx"
            Load  "dri2"
            Load  "dbe"
            Load  "dri"
            Load  "record"
            Load  "extmod"
        EndSection

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "kbd"
        EndSection

        Section "InputDevice"
            Identifier  "Mouse0"
            Driver      "mouse"
            Option      "Protocol" "auto"
            Option      "Device" "/dev/input/mice"
            Option      "ZAxisMapping" "4 5 6 7"
        EndSection

        Section "Monitor"
            Identifier   "Monitor0"
            VendorName   "LG"
            ModelName    "Smart TV"
        EndSection

        Section "Device"
            ### Available Driver options are:-
            ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
            ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
            ### <percent>: "<f>%"
            ### [arg]: arg optional
            #Option     "DRI"                # [<bool>]
            #Option     "ColorKey"           # <i>
            #Option     "VideoKey"           # <i>
            #Option     "FallbackDebug"      # [<bool>]
            #Option     "Tiling"             # [<bool>]
            #Option     "LinearFramebuffer"  # [<bool>]
            #Option     "Shadow"             # [<bool>]
            #Option     "SwapbuffersWait"    # [<bool>]
            #Option     "TripleBuffer"       # [<bool>]
            #Option     "XvMC"               # [<bool>]
            #Option     "XvPreferOverlay"    # [<bool>]
            #Option     "DebugFlushBatches"  # [<bool>]
            #Option     "DebugFlushCaches"   # [<bool>]
            #Option     "DebugWait"          # [<bool>]
            #Option     "HotPlug"            # [<bool>]
            #Option     "RelaxedFencing"     # [<bool>]
            Identifier  "Card0"
            Driver      "intel"
            BusID       "PCI:0:2:0"
        EndSection

        Section "Screen"
            Identifier "Screen0"
            Device     "Card0"
            Monitor    "Monitor0"
            DefaultDepth 24
            #SubSection "Display"
            #    Viewport   0 0
            #    Depth     1
            #EndSubSection
            #SubSection "Display"
            #    Viewport   0 0
            #    Depth     4
            #EndSubSection
            #SubSection "Display"
            #    Viewport   0 0
            #    Depth     8
            #EndSubSection
            #SubSection "Display"
            #    Viewport   0 0
            #    Depth     15
            #EndSubSection
            #SubSection "Display"
            #    Viewport   0 0
            #    Depth     16
            #EndSubSection
            SubSection "Display"
                Viewport   0 0
                Depth     24
                Modes    "1024x768" "1920x1080"
            EndSubSection
        EndSection

    According to /var/log/Xorg.0.log, my settings aren't being applied. In fact, I wonder if the config file is even being read.

        [  1209.083] (**) intel(0): Depth 24, (--) framebuffer bpp 32
        [  1209.084] (==) intel(0): RGB weight 888
        [  1209.084] (==) intel(0): Default visual is TrueColor
        [  1209.084] (II) intel(0): Integrated Graphics Chipset: Intel(R) G41
        [  1209.084] (--) intel(0): Chipset: "G41"
        [  1209.084] (**) intel(0): Relaxed fencing enabled
        [  1209.084] (**) intel(0): Wait on SwapBuffers? enabled
        [  1209.084] (**) intel(0): Triple buffering? enabled
        [  1209.084] (**) intel(0): Framebuffer tiled
        [  1209.084] (**) intel(0): Pixmaps tiled
        [  1209.084] (**) intel(0): 3D buffers tiled
        [  1209.084] (**) intel(0): SwapBuffers wait enabled
        [  1209.084] (==) intel(0): video overlay key set to 0x101fe
        [  1209.172] (II) intel(0): Output VGA1 using monitor section Monitor0
        [  1209.260] (II) intel(0): EDID for output VGA1
        [  1209.260] (II) intel(0): Printing probed modes for output VGA1
        [  1209.260] (II) intel(0): Modeline "1024x768"x60.0   65.00  1024 1048 1184 1344  768 771 777 806 -hsync -vsync (48.4 kHz)
        [  1209.260] (II) intel(0): Modeline "800x600"x60.3   40.00  800 840 968 1056  600 601 605 628 +hsync +vsync (37.9 kHz)
        [  1209.260] (II) intel(0): Modeline "800x600"x56.2   36.00  800 824 896 1024  600 601 603 625 +hsync +vsync (35.2 kHz)
        [  1209.260] (II) intel(0): Modeline "848x480"x60.0   33.75  848 864 976 1088  480 486 494 517 +hsync +vsync (31.0 kHz)
        [  1209.260] (II) intel(0): Modeline "640x480"x59.9   25.18  640 656 752 800  480 489 492 525 -hsync -vsync (31.5 kHz)
        [  1209.260] (II) intel(0): Output VGA1 connected
        [  1209.260] (II) intel(0): Using user preference for initial modes
        [  1209.260] (II) intel(0): Output VGA1 using initial mode 1024x768
        [  1209.260] (II) intel(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated.
        [  1209.260] (II) intel(0): Kernel page flipping support detected, enabling
        [  1209.260] (==) intel(0): DPI set to (96, 96)
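    Since the log shows the driver only probing modes up to 1024x768 on VGA1 (the TV's EDID apparently isn't offering 1080p over VGA), one standard workaround is to define the mode manually with xrandr. This is a sketch rather than a guaranteed fix; the VGA1 output name comes from the log above, and the modeline is what cvt generates for 1920x1080 at 60 Hz:

        # Generate a modeline for 1920x1080 @ 60 Hz
        cvt 1920 1080 60
        # Register the mode with the X server and attach it to VGA1
        xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
        xrandr --addmode VGA1 1920x1080_60.00
        xrandr --output VGA1 --mode 1920x1080_60.00

    If that works, the commands can be re-applied at login (for example from a startup script), since modes added this way do not persist across restarts of X.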

    Read the article

  • Bypass cache for mobile user agents, VARNISH+NGINX+W3CACHE

    - by Mike McGhee
    Right now I'm running WordPress with W3 Total Cache on nginx, with Varnish as the front end. I'm trying to use the WPtouch Pro plugin for WordPress to display mobile sites, but it is not working: it still shows the desktop theme. I've put the mobile user agents in the rejected user agents box in W3 Total Cache. Here is the nginx config that W3 Total Cache spit out:

        # BEGIN W3TC Page Cache cache
        location ~ /wp-content/w3tc/pgcache.*html$ {
            expires modified 3600s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.4";
            add_header Vary "Accept-Encoding, Cookie";
        }
        location ~ /wp-content/w3tc/pgcache.*gzip$ {
            gzip off;
            types {}
            default_type text/html;
            expires modified 3600s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.4";
            add_header Vary "Accept-Encoding, Cookie";
            add_header Content-Encoding gzip;
        }
        # END W3TC Page Cache cache

        # BEGIN W3TC Browser Cache
        gzip on;
        gzip_types text/css application/x-javascript text/x-component text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon;
        location ~ \.(css|js|htc)$ {
            expires 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.4";
        }
        location ~ \.(html|htm|rtf|rtx|svg|svgz|txt|xsd|xsl|xml)$ {
            expires 3600s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.4";
        }
        location ~ \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|otf|odb|odc|odf|odg|odp|ods|odt|ogg|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|tif|tiff|ttf|ttc|wav|wma|wri|xla|xls|xlsx|xlt|xlw|zip)$ {
            expires 31536000s;
            add_header X-Powered-By "W3 Total Cache/0.9.2.4";
        }
        # END W3TC Browser Cache

        # BEGIN W3TC Minify core
        rewrite ^/wp-content/w3tc/min/w3tc_rewrite_test$ /wp-content/w3tc/min/index.php?w3tc_rewrite_test=1 last;
        rewrite ^/wp-content/w3tc/min/(.+\.(css|js))$ /wp-content/w3tc/min/index.php?file=$1 last;
        # END W3TC Minify core

        # BEGIN W3TC Page Cache core
        rewrite ^(.*\/)?w3tc_rewrite_test$ $1?w3tc_rewrite_test=1 last;
        set $w3tc_rewrite 1;
        if ($request_method = POST) { set $w3tc_rewrite 0; }
        if ($query_string != "") { set $w3tc_rewrite 0; }
        if ($http_host != "mysite.com") { set $w3tc_rewrite 0; }
        set $w3tc_rewrite2 1;
        if ($request_uri !~ \/$) { set $w3tc_rewrite2 0; }
        if ($request_uri ~* "(sitemap(_index)?\.xml(\.gz)?|[a-z0-9_\-]+-sitemap([0-9]+)?\.xml(\.gz)?)") { set $w3tc_rewrite2 1; }
        if ($w3tc_rewrite2 != 1) { set $w3tc_rewrite 0; }
        set $w3tc_rewrite3 1;
        if ($request_uri ~* "(\/wp-admin\/|\/xmlrpc.php|\/wp-(app|cron|login|register|mail)\.php|\/feed\/|wp-.*\.php|index\.php)") { set $w3tc_rewrite3 0; }
        if ($request_uri ~* "(wp\-comments\-popup\.php|wp\-links\-opml\.php|wp\-locations\.php)") { set $w3tc_rewrite3 1; }
        if ($w3tc_rewrite3 != 1) { set $w3tc_rewrite 0; }
        if ($http_cookie ~* "(comment_author|wp\-postpass|wordpress_\[a\-f0\-9\]\+|wordpress_logged_in)") { set $w3tc_rewrite 0; }
        if ($http_user_agent ~* "(W3\ Total\ Cache/0\.9\.2\.4|iphone|ipod|ipad|aspen|incognito|webmate|android|dream|cupcake|froyo|blackberry9500|blackberry9520|blackberry9530|blackberry9550|blackberry\ 9800|blackberry\ 9780|webos|s8000|bada)") { set $w3tc_rewrite 0; }
        set $w3tc_ua "";
        if ($http_user_agent ~* "(acer\ s100|android|archos5|blackberry9500|blackberry9530|blackberry9550|blackberry\ 9800|cupcake|docomo\ ht\-03a|dream|htc\ hero|htc\ magic|htc_dream|htc_magic|incognito|ipad|iphone|ipod|kindle|lg\-gw620|liquid\ build|maemo|mot\-mb200|mot\-mb300|nexus\ one|opera\ mini|samsung\-s8000|series60.*webkit|series60/5\.0|sonyericssone10|sonyericssonu20|sonyericssonx10|t\-mobile\ mytouch\ 3g|t\-mobile\ opal|tattoo|webmate|webos)") { set $w3tc_ua _high; }
        set $w3tc_ref "";
        set $w3tc_ssl "";
        set $w3tc_enc "";
        if ($http_accept_encoding ~ gzip) { set $w3tc_enc _gzip; }
        set $w3tc_ext "";
        if (-f "$document_root/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl.html$w3tc_enc") { set $w3tc_ext .html; }
        if ($w3tc_ext = "") { set $w3tc_rewrite 0; }
        if ($w3tc_rewrite = 1) { rewrite .* "/wp-content/w3tc/pgcache/$request_uri/_index$w3tc_ua$w3tc_ref$w3tc_ssl$w3tc_ext$w3tc_enc" last; }
        # END W3TC Page Cache core

    And here is what I have in my Varnish VCL:

        sub vcl_recv {
            # Add a unique header containing the client address
            remove req.http.X-Forwarded-For;
            set req.http.X-Forwarded-For = client.ip;
            # Device detection
            set req.http.X-Device = "desktop";
            if ( req.http.User-Agent ~ "iP(hone|od|ad)" || req.http.User-Agent ~ "Android" ) {
                set req.http.X-Device = "smart";
            }
            elseif ( req.http.User-Agent ~ "(SymbianOS|BlackBerry|SonyEricsson|Nokia|SAMSUNG|^LG)" ) {
                set req.http.X-Device = "cell";
            }
        }

    Any help is greatly appreciated; I've been banging my head against this for 2 days.
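    A likely gap given the config above, offered as a hedged sketch rather than a confirmed diagnosis: Varnish classifies devices into X-Device, but nothing shown here makes the cache vary on that header, so the first visitor's desktop page gets cached and served to everyone. One common completion is to bypass the cache for non-desktop clients (Varnish 3 syntax; on Varnish 2 use plain pass; instead of return (pass);):

        # Hypothetical addition at the end of vcl_recv: mobile and
        # tablet traffic skips the cache so WPtouch can pick the theme
        if (req.http.X-Device != "desktop") {
            return (pass);
        }

    A more cache-friendly variant is to keep caching but mix X-Device into the hash in vcl_hash, so desktop and mobile copies are stored as separate objects.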

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction

    This post shows how to use the Oracle Solaris 11 Automated Install (AI) server to automate Solaris 11 Zones installation. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for a Global Zone and two Non-Global Zones located on the same system.

    Architecture layout:

    Figure 1. Architecture layout

    Prerequisite

    Set up the Automated Install server using the instructions in "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup is creating two Solaris 11 Zones configuration files.

    Step 1: Create the Solaris 11 Zones configuration files

    The Solaris Zones configuration files should be in the format produced by the zonecfg export command:

        # zonecfg -z zone1 export > /var/tmp/zone1
        # cat /var/tmp/zone1
        create -b
        set brand=solaris
        set zonepath=/rpool/zones/zone1
        set autoboot=true
        set ip-type=exclusive
        add anet
        set linkname=net0
        set lower-link=auto
        set configure-allowed-address=true
        set link-protection=mac-nospoof
        set mac-address=random
        end

    Create a backup copy of this file under a different name, for example, zone2:

        # cp /var/tmp/zone1 /var/tmp/zone2

    Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:

        set zonepath=/rpool/zones/zone2

    Step 2: Copy and share the Zones configuration files

    Create the NFS directory for the Zones configuration files:

        # mkdir /export/zone_config

    Share the directory for the Zones configuration files:

        # share -o ro /export/zone_config

    Copy the Zones configuration files into the NFS shared directory:

        # cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

    Verify that the NFS share has been created using the following command:

        # share
        export_zone_config      /export/zone_config     nfs     sec=sys,ro

    Step 3: Add the Global Zone as a client of the install service

    Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service:

        # installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

    You can verify the client creation using the following command:

        # installadm list -c
        Service Name  Client Address     Arch   Image Path
        ------------  --------------     ----   ----------
        s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

    We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19), and architecture (i386).

    Step 4: Global Zone manifest setup

    First, get a list of the installation services and the manifests associated with them:

        # installadm list -m
        Service Name   Manifest        Status
        ------------   --------        ------
        default-i386   orig_default   Default
        s11x86service  orig_default   Default

    Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

        # installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

    Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
        # cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

    Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone, and the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2. You should replace server_ip with the IP address of the NFS server.

        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
          <ai_instance>
            <target>
              <logical>
                <zpool name="rpool" is_root="true">
                  <filesystem name="export" mountpoint="/export"/>
                  <filesystem name="export/home"/>
                  <be name="solaris"/>
                </zpool>
              </logical>
            </target>
            <software type="IPS">
              <source>
                <publisher name="solaris">
                  <origin name="http://pkg.oracle.com/solaris/release"/>
                </publisher>
              </source>
              <software_data action="install">
                <name>pkg:/entire@latest</name>
                <name>pkg:/group/system/solaris-large-server</name>
              </software_data>
            </software>
            <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
            <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
          </ai_instance>
        </auto_install>

    The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

        # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

    You can verify the manifest creation using the following command:

        # installadm list -n s11x86service -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           orig_default        Default  None
           gzmanifest          Inactive None

    We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

    Step 5: Non-Global Zone manifest setup

    The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. It is available at /usr/share/auto_install/manifest/zone_default.xml, and in this example we use it as-is. The default AI manifest for zones follows:

        # cat /usr/share/auto_install/manifest/zone_default.xml
        <?xml version="1.0" encoding="UTF-8"?>
        <!--
          Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
        -->
        <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
        <auto_install>
            <ai_instance name="zone_default">
                <target>
                    <logical>
                        <zpool name="rpool">
                            <!--
                              Subsequent <filesystem> entries instruct an installer
                              to create following ZFS datasets:
                                  <root_pool>/export         (mounted on /export)
                                  <root_pool>/export/home    (mounted on /export/home)
                              Those datasets are part of standard environment
                              and should be always created.
                              In rare cases, if there is a need to deploy a zone
                              without these datasets, either comment out or remove
                              <filesystem> entries. In such scenario, it has to be also
                              assured that in case of non-interactive post-install
                              configuration, creation of initial user account is
                              disabled in related system configuration profile.
                              Otherwise the installed zone would fail to boot.
                            -->
                            <filesystem name="export" mountpoint="/export"/>
                            <filesystem name="export/home"/>
                            <be name="solaris">
                                <options>
                                    <option name="compression" value="on"/>
                                </options>
                            </be>
                        </zpool>
                    </logical>
                </target>
                <software type="IPS">
                    <destination>
                        <image>
                            <!-- Specify locales to install -->
                            <facet set="false">facet.locale.*</facet>
                            <facet set="true">facet.locale.de</facet>
                            <facet set="true">facet.locale.de_DE</facet>
                            <facet set="true">facet.locale.en</facet>
                            <facet set="true">facet.locale.en_US</facet>
                            <facet set="true">facet.locale.es</facet>
                            <facet set="true">facet.locale.es_ES</facet>
                            <facet set="true">facet.locale.fr</facet>
                            <facet set="true">facet.locale.fr_FR</facet>
                            <facet set="true">facet.locale.it</facet>
                            <facet set="true">facet.locale.it_IT</facet>
                            <facet set="true">facet.locale.ja</facet>
                            <facet set="true">facet.locale.ja_*</facet>
                            <facet set="true">facet.locale.ko</facet>
                            <facet set="true">facet.locale.ko_*</facet>
                            <facet set="true">facet.locale.pt</facet>
                            <facet set="true">facet.locale.pt_BR</facet>
                            <facet set="true">facet.locale.zh</facet>
                            <facet set="true">facet.locale.zh_CN</facet>
                            <facet set="true">facet.locale.zh_TW</facet>
                        </image>
                    </destination>
                    <software_data action="install">
                        <name>pkg:/group/system/solaris-small-server</name>
                    </software_data>
                </software>
            </ai_instance>
        </auto_install>

    (Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:

        # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

    Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest.
        # installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

    Note: Do not use the following elements or attributes in a non-global zone AI manifest:

        - The auto_reboot attribute of the ai_instance element
        - The http_proxy attribute of the ai_instance element
        - The disk child element of the target element
        - The noswap attribute of the logical element
        - The nodump attribute of the logical element
        - The configuration element

    Step 6: Global Zone profile setup

    We are going to create a global zone configuration profile, which includes the host information, for example: host name, IP address, name services, etc.

        # sysconfig create-profile -o /var/tmp/gz_profile.xml

    You need to provide the host information, for example:

        - Default router
        - Root password
        - DNS information

    The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

    Figure 2. Profile creation menu

    You can validate the profile using the following command:

        # installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
        Validating static profile gz_profile.xml...  Passed

    Next, associate the profile with the install service. In our case, use the following syntax:

        # installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         None

    We can see that gz_profile has been created and associated with the s11x86service install service.

    Step 7: Set up the Solaris Zones configuration profiles

    This step is similar to the Global Zone profile creation in step 6:

        # sysconfig create-profile -o /var/tmp/zone1_profile.xml
        # sysconfig create-profile -o /var/tmp/zone2_profile.xml

    You can validate the profiles using the following commands:

        # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
        Validating static profile zone1_profile.xml...  Passed
        # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
        Validating static profile zone2_profile.xml...  Passed

    Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

    The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

        # installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

    You can verify the profile creation using the following command:

        # installadm list -n s11x86service -p
        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           zone1_profile      zonename = zone1
           zone2_profile      zonename = zone2
           gz_profile         None

    We can see that we have three profiles in the s11x86service install service:

        - Global Zone: gz_profile
        - zone1: zone1_profile
        - zone2: zone2_profile
    Step 8: Global Zone setup

    Associate the global zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (global zone), where gzmanifest is the name of the manifest, gz_profile is the name of the configuration profile, mac="0:14:4f:2:a:19" is the client (global zone) MAC address, and s11x86service is the install service name:

        # installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

    You can verify the manifest and profile association using the following command:

        # installadm list -n s11x86service -p -m
        Service/Manifest Name  Status   Criteria
        ---------------------  ------   --------
        s11x86service
           gzmanifest                   mac  = 00:14:4F:02:0A:19
           orig_default        Default  None

        Service/Profile Name  Criteria
        --------------------  --------
        s11x86service
           gz_profile         mac      = 00:14:4F:02:0A:19
           zone2_profile      zonename = zone2
           zone1_profile      zonename = zone1

    Step 9: Provision the host with the Non-Global Zones

    The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

    Figure 3. Network boot

    You will then be prompted by a GRUB menu with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. The reason we need to do this is that we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally.

    Figure 4. GRUB menu

    What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files sufficient to run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list the status of all the Solaris Zones using the following command:

        # zoneadm list -civ

    Once the Zones are in the running state, you can log in to a Zone using the following command:

        # zlogin zone1

    Troubleshooting Automated Installations

    If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

    NOTE: Zones are not installed if any of the following errors occurs:

        - A zone config file is not syntactically correct.
        - A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
        - Required datasets are not configured in the global zone.

    For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

    Conclusion

    This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • How do I ignore set and get when assigning or retrieving a value?

    - by cbsch
    Hello. Is it possible to ignore set and get when I'm assigning to or retrieving a value? Specifically, I'm inheriting from a class that has a property declared like this:

        public Int32 Value { get; set; }

    What I'd like to do is override it and do something useful in those setters and getters. The problem is that when I override it, I also have to manually assign or return the value from the property. If I do something like this:

        public override Int32 Value
        {
            get { return this.Value; }
            set
            {
                this.Value = value;
                // do something useful
            }
        }

    then I'm creating an infinite loop. Is there a way to set or get the value without invoking the code in set and get, or do I have to make a separate name for the actual variable?
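    A hedged sketch of the standard pattern (assuming the base class can be edited and its property made virtual, which the poster may not control): replace the auto-property with an explicit backing field, so the override has something to read and write without recursing.

        public class BaseClass
        {
            // Explicit backing field instead of an auto-property
            protected int _value;

            public virtual int Value
            {
                get { return _value; }
                set { _value = value; }
            }
        }

        public class Derived : BaseClass
        {
            public override int Value
            {
                get { return _value; }
                set
                {
                    _value = value;
                    // do something useful here; no recursion, because
                    // we touch the field, not the property
                }
            }
        }

    If the base class cannot be changed (and its property is not virtual), the usual alternatives are hiding the property with new, or wrapping the base property from the derived setter via base.Value, which also avoids the self-call.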

    Read the article

  • dijit.Dialog 1.4: setting the size is limited to 600x400, no matter what size I set

    - by John Evans
    I'm trying to set the size of a dijit.Dialog, but it seems limited to 600x400 no matter what size I set. I've copied the code from dojocampus and the dialog appears, but when I set the size larger, it only shows 600x400. Using Firebug and selecting items inside the dialog, I can see that they are larger than the dialog, but they don't display correctly. I set it up to scroll, but the bottom of the scrollbar is out of view. In Firebug, I've checked the maxSize from _Widget and it is set to Infinity. Here is my code to set up the dialog:

        <div id="sized" dojoType="dijit.Dialog" title="My scrolling dialog">
            <div style="width: 580px; height: 600px; overflow: scroll;">

    Any suggestions for sizing the dialog box larger?
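    One thing worth trying, offered as an untested sketch against the Dojo 1.4-era API: size the dialog's DOM node explicitly after it is created, then ask the widget to re-lay itself out. dijit.Dialog extends dijit.layout.ContentPane, so a resize() call should be available.

        // Hypothetical sketch; "sized" is the dialog id from the markup above
        dojo.addOnLoad(function () {
            var dlg = dijit.byId("sized");
            dojo.style(dlg.domNode, { width: "900px", height: "700px" });
            dlg.resize(); // recompute the dialog's internal layout
        });

    If a theme stylesheet turns out to be imposing the 600x400 cap, overriding the relevant rule on .dijitDialogPaneContent would be the CSS-side equivalent.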

    Read the article

  • iPhone SDK: How to check that the user set up a valid IP, netmask, and router?

    - by WebberLai
    It should be simple to understand from this picture. I added a few text fields in a table view, and I've already set the keyboard style to the number pad. Now the problems are:

    1. Do I need to create 12 text fields (e.g., UITextField *ip1, ip2, ip3, ip4, ...), or just set a different tag for each text field?
    2. How do I check whether the user entered invalid characters rather than 3 valid digits? (Even though the keyboard is the number pad, words might still be pasted in.)
    3. How do I check that the user followed these setting rules?
       - If IP1 is 0-223, then IP2 must be 0-255, and IP3 and IP4 are both 0-255.
       - If IP1 is 172, IP2 must be 16-31, and IP3 and IP4 are both 0-255. If IP1 is 192, IP2 must be 168, and IP3 and IP4 are 0-255.
       - Netmask defaults to 255.255.255.0.
       - Router: 0-223, 0-255, 0-255, 0-255.

    A friend taught me these IP setting rules; I'm not sure whether they are right or not.
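    A hedged sketch of the per-field validation (plain Foundation code; the helper name is hypothetical, and the private-address rules listed above should be verified independently): parse each field with NSScanner so pasted non-numeric text is rejected, then range-check the octet.

        // Returns YES only if `text` is a whole number within [min, max];
        // pasted words or trailing junk make scanInteger:/isAtEnd fail.
        static BOOL OctetInRange(NSString *text, NSInteger min, NSInteger max) {
            NSScanner *scanner = [NSScanner scannerWithString:text];
            NSInteger value = 0;
            if (![scanner scanInteger:&value] || ![scanner isAtEnd]) {
                return NO;
            }
            return (value >= min && value <= max);
        }

        // Usage sketch with four UITextFields (tags 1-4 on one outlet each):
        // BOOL ok = OctetInRange(ip1.text, 0, 223) &&
        //           OctetInRange(ip2.text, 0, 255) &&
        //           OctetInRange(ip3.text, 0, 255) &&
        //           OctetInRange(ip4.text, 0, 255);

    Using one UITextField per octet with distinct tags (rather than 12 separately named outlets) keeps the validation loopable and the view code shorter.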

    Read the article

  • How to set order of appearance for fields when using Html.EditorFor in MVC 2?

    - by Anrie
    I have the following classes in my model:

        public abstract class Entity : IEntity
        {
            [ScaffoldColumn(false)]
            public int Id { get; set; }

            [Required, StringLength(500)]
            public string Name { get; set; }
        }

    and

        public class Model : SortableEntity
        {
            [Required]
            public ModelType Type { get; set; }

            [ListRequired]
            public List<Producer> Producers { get; set; }

            public List<PrintArea> PrintAreas { get; set; }
            public List<Color> Colors { get; set; }
        }

    To display the "Model" class in the view, I simply call Html.EditorFor(model => model), but the "Name" property of the base class is rendered last, which is not the desired behaviour. Is it possible to influence the order of the displayed fields somehow?
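    One common approach, sketched with hedges (MVC 2 does not honour a display-order attribute out of the box): replace the default object template with a typed editor template, which makes the field order explicit. The MyApp.Models namespace below is a hypothetical stand-in for the real one.

        <%-- Views/Shared/EditorTemplates/Model.ascx (name must match the type) --%>
        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<MyApp.Models.Model>" %>
        <%= Html.EditorFor(m => m.Name) %> <%-- base-class field first --%>
        <%= Html.EditorFor(m => m.Type) %>
        <%= Html.EditorFor(m => m.Producers) %>
        <%= Html.EditorFor(m => m.PrintAreas) %>
        <%= Html.EditorFor(m => m.Colors) %>

    With that file in place, Html.EditorFor(model => model) picks the template up by convention, and the fields render in exactly the order written in the template.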

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O, and other resources on the server. Today's entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL.

    As always, we'll need some example data. In fact, we are going to use three tables today, each of which is structured like this: each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

        USE master;
        GO
        IF DB_ID('SortTest') IS NOT NULL
            DROP DATABASE SortTest;
        GO
        CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
        GO
        ALTER DATABASE SortTest
            MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
        GO
        ALTER DATABASE SortTest
            MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
        GO
        ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
        ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
        ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
        ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
        ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
        ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
        ALTER DATABASE SortTest SET MULTI_USER;
        ALTER DATABASE SortTest SET RECOVERY SIMPLE;

        USE SortTest;
        GO
        CREATE TABLE dbo.TestCHAR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding CHAR(3999) NOT NULL,
            CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id),
        );
        CREATE TABLE dbo.TestMAX
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id),
        );
        CREATE TABLE dbo.TestTEXT
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding TEXT NOT NULL,
            CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id),
        );

        -- =============
        -- Load TestCHAR (about 3s)
        -- =============
        INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
        SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
        FROM
        (
            SELECT TOP (50000)
                n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
            FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
            ORDER BY n ASC
        ) AS Data
        ORDER BY Data.n ASC;

        -- ============
        -- Load TestMAX (about 3s)
        -- ============
        INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
        SELECT CONVERT(VARCHAR(MAX), padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- =============
        -- Load TestTEXT (about 5s)
        -- =============
        INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
        SELECT CONVERT(TEXT, padding)
        FROM dbo.TestCHAR
        ORDER BY id;

        -- ==========
        -- Space used
        -- ==========
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
        EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';

        CHECKPOINT;

    That takes around 15 seconds to run, and shows the space allocated to each table in its output. To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
    The basic shape of the test query is the same for each of the three test tables:

        SELECT TOP (150)
            T.id,
            T.padding
        FROM dbo.Test AS T
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

    Test 1 - CHAR(3999)

    Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute, twice as long as when run serially.

    Rather than attempting further guesses at the cause of the slowness, let's go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TC.id,
            TC.padding
        FROM dbo.TestCHAR AS TC
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    Let's take a closer look at the statistics and query plan generated from this. Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB.

    Sort is a blocking operator: it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant, that is, memory allocated for the query's use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution.

    Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
    In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used, all of it for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants: a query cannot 'cheat' and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall.

    If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I'll cover that in a future post.

    CHAR(3999) Performance Summary:

        - 5 seconds elapsed time
        - 243MB memory grant
        - 195MB tempdb usage
        - 192MB estimated sort set
        - 25,043 logical reads
        - Sort Warning

    Test 2 - VARCHAR(MAX)

    We'll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TM.id,
            TM.padding
        FROM dbo.TestMAX AS TM
        ORDER BY NEWID()
        OPTION (MAXDOP 1);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly, to 245MB. The most marked difference is in the amount of tempdb space used: this query wrote almost 391MB of sort run data to the physical tempdb file. Don't draw any general conclusions about VARCHAR(MAX) versus CHAR from this; I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR. I just wanted to make Test 2 a bit more exciting.

    MAX Performance Summary:

        - 8 seconds elapsed time
        - 245MB memory grant
        - 391MB tempdb usage
        - 193MB estimated sort set
        - 25,043 logical reads
        - Sort Warning

    Test 3 - TEXT

    The same test again, but using the deprecated TEXT data type for the padding column:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            TT.id,
            TT.padding
        FROM dbo.TestTEXT AS TT
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    This time the query runs in 500ms. If you look at the metrics we have been checking so far, it's not hard to understand why:

    TEXT Performance Summary:

        - 0.5 seconds elapsed time
        - 9MB memory grant
        - 5MB tempdb usage
        - 5MB estimated sort set
        - 207 logical reads
        - 596 LOB logical reads
        - Sort Warning

    SQL Server's memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn't matter very much. Why is the data size so much smaller? The query still produces the correct results, including the large amount of data held in the padding column, so what magic is being performed here?

    TEXT versus MAX Storage

    The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages, which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data.

    SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long: 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller, usually 16 bytes, but the details of that don't really matter right now).

    OK, so this query is much more efficient because it is sorting a very much smaller data set: SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: why doesn't SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)?

    The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column; MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type is to be stored off-row, unless the 'text in row' table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data: the 'large value types out of row' table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

    The MAXOOR Table

    The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row, so let's look at that option now.
    This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

        CREATE TABLE dbo.TestMAXOOR
        (
            id INTEGER IDENTITY (1,1) NOT NULL,
            padding VARCHAR(MAX) NOT NULL,
            CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id),
        );

        EXECUTE sys.sp_tableoption
            @TableNamePattern = N'dbo.TestMAXOOR',
            @OptionName = 'large value types out of row',
            @OptionValue = 'true';

        SELECT large_value_types_out_of_row
        FROM sys.tables
        WHERE [schema_id] = SCHEMA_ID(N'dbo')
        AND name = N'TestMAXOOR';

        INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
        SELECT SPACE(0)
        FROM dbo.TestCHAR
        ORDER BY id;

        UPDATE TM WITH (TABLOCK)
        SET padding.WRITE (TC.padding, NULL, NULL)
        FROM dbo.TestMAXOOR AS TM
        JOIN dbo.TestCHAR AS TC
            ON TC.id = TM.id;

        EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';

        CHECKPOINT;

    Test 4 - MAXOOR

    We can now re-run our test on the MAXOOR (MAX out of row) table:

        DECLARE @read BIGINT, @write BIGINT;

        SELECT @read = SUM(num_of_bytes_read),
               @write = SUM(num_of_bytes_written)
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

        SET STATISTICS IO ON;

        SELECT TOP (150)
            MO.id,
            MO.padding
        FROM dbo.TestMAXOOR AS MO
        ORDER BY NEWID()
        OPTION (MAXDOP 1, RECOMPILE);

        SET STATISTICS IO OFF;

        SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
               tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
               internal_use_MB =
               (
                   SELECT internal_objects_alloc_page_count / 128.0
                   FROM sys.dm_db_task_space_usage
                   WHERE session_id = @@SPID
               )
        FROM tempdb.sys.database_files AS DBF
        JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
            ON FS.file_id = DBF.file_id
        WHERE DBF.type_desc = 'ROWS';

    MAXOOR Performance Summary:

        - 0.3 seconds elapsed time
        - 245MB memory grant
        - 0MB tempdb usage
        - 193MB estimated sort set
        - 207 logical reads
        - 446 LOB logical reads
        - No sort warning

    The query runs very quickly, slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

    The Hidden Problem

    There is still a huge problem with this query, though: it requires a 245MB memory grant. No wonder the sort doesn't spill to tempdb now; 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in Test 2 (where the MAX data was stored in-row).

    The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting 'large value types out of row'. Why? Because this option is dynamic: changing it does not immediately force all MAX data in the table in-row or off-row, but only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product.

    So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions, where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That's 32,000 8KB pages that might be put to much better use.

The Solution

The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order.

The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

SET STATISTICS IO ON;

WITH TestTable AS
(
    SELECT * FROM dbo.TestCHAR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id = ANY (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAX
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestTEXT
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAXOOR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

Table 'TestCHAR'   logical reads 25511   lob logical reads   0
Table 'TestMAX'    logical reads 25511   lob logical reads   0
Table 'TestTEXT'   logical reads   412   lob logical reads 597
Table 'TestMAXOOR' logical reads   413   lob logical reads 446

We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835   lob logical reads   0
Table 'TestMAX'    logical reads 835   lob logical reads   0
Table 'TestTEXT'   logical reads 686   lob logical reads 597
Table 'TestMAXOOR' logical reads 686   lob logical reads 448

With the new index, all four queries use the same query plan (shown as an image in the original post).

Performance Summary:
- 0.3 seconds elapsed time
- 6MB memory grant
- 0MB tempdb usage
- 1MB sort set
- 835 logical reads (CHAR, MAX)
- 686 logical reads (TEXT, MAXOOR)
- 597 LOB logical reads (TEXT)
- 448 LOB logical reads (MAXOOR)
- No sort warning

I'll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings. It isn't about the internal differences between TEXT and MAX data types either. It isn't even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: many developers might not have tuned our starting example query at all – 5 seconds isn't that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for 'just being slow' – who knows. 5 seconds isn't awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production, that's 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that's not the point either.

The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won't get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances.

By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won't get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query), do it before you head out for lunch…

This post is dedicated to the people of Christchurch, New Zealand.

© 2011 Paul White
email: @[email protected]
twitter: @SQL_Kiwi

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stackoverflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution

    I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard-coding sample data, so now I am moving onto the GUI and preparing my MVP.

    Section A

    1. I have seen MVP using the view as the entry point: in the view's constructor it creates the presenter, which in turn creates the model, wiring up events as needed.
    2. I have also seen the presenter as the entry point, where a view, model and presenter are created, and the presenter is given a view and model object in its constructor to wire up the events.
    3. As in 2, but the model is not passed to the presenter. Instead the model is a static class whose methods are called and responses returned directly.

    Section B

    In terms of keeping the view and model in sync I have seen:

    1. Whenever a value in the view is changed (e.g. the TextChanged event in .NET/C#), a DataChangedEvent fires and is passed through into the model, keeping it in sync at all times. Where the model changes (e.g. via a background event it listens to), the view is updated by the same idea of raising a DataChangedEvent. When a user wants to commit changes, a SaveEvent fires and passes through into the model to make the save. In this case the model mimics the view's data and processes actions.
    2. Similar to B1, however the view does not sync with the model all the time. Instead, when the user wants to commit changes, a SaveEvent is fired and the presenter grabs the latest details and passes them into the model. In this case the model does not know about the view's data until it is required to act upon it, at which point it is passed all the needed details.

    Section C

    Displaying business objects in the view, i.e. an object (MyClass), not primitive data (int, double):

    1. The view has property fields for all the data it will display, typed as domain/business objects. For example, view.Animals exposes an IEnumerable<IAnimal> property, even though the view processes these into nodes in a TreeView; for the selected animal it would expose SelectedAnimal as an IAnimal property.
    2. The view has no knowledge of domain objects; it exposes properties only for primitive types and framework (.NET/Java) types. In this instance the presenter passes the domain object to an adapter, and the adapter translates a given business object into the controls visible on the view. Here the adapter must have access to the actual controls on the view, not just any view, so it becomes more tightly coupled.

    Section D

    Multiple views used to create a single control, i.e. you have a complex view with a simple model, like saving objects of different types. You could have a menu system at the side, and with each click on an item the appropriate controls are shown.

    1. You create one huge view that contains all of the individual controls, exposed via the view's interface.
    2. You have several views: one view for the menu plus a blank panel. This view creates the other views required but does not display them (visible = false); it also implements the interface for each view it contains (i.e. the child views), so it can expose them to one presenter. The blank panel is filled with the other views (Controls.Add(myView)) and (myView.Visible = true).
    The events raised in these child views are handled by the parent view, which in turn passes the event to the presenter, and vice versa for supplying events back down to child elements.
    3. Each view, be it the main parent or the smaller child views, is wired into its own presenter and model. You can literally just drop a view control onto an existing form and it will have the functionality ready; it just needs wiring to a presenter behind the scenes.

    Section E

    Should everything have an interface? How the MVP is done in the above examples will affect this answer, as the approaches might not be cross-compatible.

    1. Everything has an interface: the View, Presenter and Model. Each of these then obviously has a concrete implementation, even if you only have one concrete view, model and presenter.
    2. The View and Model have an interface. This allows the views and models to differ. The presenter creates/is given view and model objects, and it just serves to pass messages between them.
    3. Only the View has an interface. The Model has static methods and is not created, thus no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static, the Model has no link to the presenter.

    Personal thoughts

    From all the different variations I have presented (most of which I have probably used in some form, and I am sure there are more): I prefer A3, as it keeps business logic reusable outside just MVP; B2, for less data duplication and fewer events being fired; and C1, for not adding in another class. Sure, C1 puts a small amount of non-unit-testable logic into a view (how a domain object is visualised), but this could be code-reviewed, or simply viewed in the application. If the logic were complex I would agree to an adapter class, but not in all cases.

    For Section D, I feel D1 creates a view that is too big, at least for a menu example. I have used D2 and D3 before. The problem with D2 is that you end up having to write lots of code to route events between the presenter and the correct child view, and it's not drag/drop compatible: each new control needs more wiring to support the single presenter. D3 is my preferred choice, but it adds in yet more classes, as presenters and models, to deal with the view, even if the view happens to be very simple or has no need to be reused. I think a mixture of D2 and D3 is best, based on circumstances.

    As to Section E, I think everything having an interface could be overkill. I already do it for domain/business objects and often see no advantage in the "design" by doing so, but it does help in mocking objects in tests. Personally I would see E2 as a classic solution, although I have seen E3 used in two projects I have worked on previously.

    Question

    Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work that has variations, and I remember when I first started doing MVC, I understood the concept but could not originally work out where the entry point was: everything has its own function, but what controls and creates the original set of MVC objects?
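
    For what it's worth, here is a minimal sketch of the A2/B2/E2 combination described above – presenter as the entry point, view behind an interface, model read only on save. It is written in Python purely for brevity; every identifier is invented for illustration, and the shapes map one-to-one onto C# interfaces and WinForms events:

    from abc import ABC, abstractmethod

    class ICustomerView(ABC):
        """What the presenter needs from any view (E2: view behind an interface)."""
        @abstractmethod
        def get_name(self) -> str: ...
        @abstractmethod
        def show_name(self, name: str) -> None: ...

    class CustomerModel:
        def __init__(self) -> None:
            self.name = "initial"
        def save(self, name: str) -> None:
            self.name = name  # stand-in for a repository call

    class CustomerPresenter:
        def __init__(self, view: ICustomerView, model: CustomerModel) -> None:
            self._view, self._model = view, model
        def on_load(self) -> None:
            self._view.show_name(self._model.name)
        def on_save(self) -> None:
            # B2: pull from the view only when the user commits.
            self._model.save(self._view.get_name())

    class ConsoleCustomerView(ICustomerView):
        def __init__(self) -> None:
            self._name = ""
        def get_name(self) -> str:
            return self._name
        def show_name(self, name: str) -> None:
            self._name = name
            print(f"displaying: {name}")

    # A2: the composition root creates all three and wires them together.
    view = ConsoleCustomerView()
    presenter = CustomerPresenter(view, CustomerModel())
    presenter.on_load()

    In a WinForms version the composition root would live in Program.Main, and on_load/on_save would be handlers attached to form events.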

    Read the article

  • How can I set a Windows user environment variable that takes effect for the current session?

    - by Graham Powell
    I am trying to set a Windows user environment variable and then launch an application via either batch file or a script. However, the environment variable is not set to the appropriate value until after the user logs off and logs back on. (I think a more accurate description would be that the new value is not available to the app until after the next logon.) Is there any way to set a variable in the user's environment so that it's immediately available? I'm doing this because this program's functionality can be controlled by environment variables, and users will need different functionality at different times. Because of license constraints I need to set this dynamically, if possible. Thanks, Graham
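
    One detail worth knowing: values written by setx or the System control panel land in the registry and typically only reach shells and apps started after the next logon (or after Explorer picks up the change), but a child process always inherits whatever environment its launcher has at launch time. So a plain set (not setx) in the launching batch file already gives the app the new value immediately. The same idea in script form – a minimal sketch, with the variable name and program path invented for illustration:

    import os
    import subprocess

    env = os.environ.copy()
    env["APP_FEATURE_LEVEL"] = "full"   # hypothetical variable the application reads

    # The child sees the new value immediately; the persistent per-user value
    # in the registry (what setx writes) is untouched, so no logoff is needed.
    subprocess.run([r"C:\Program Files\Vendor\app.exe"], env=env)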

    Read the article

  • How to make Horde connect to mysql with UTF-8 character set?

    - by jkj
    How do I tell Horde 3.3.11 to use UTF-8 for its MySQL connection? The $conf['sql']['charset'] setting only tells Horde what to expect from the database. Horde uses MDB2 to connect to MySQL. Is there a way to force MDB2, or the MySQL character_set_client, from php.ini?

    So far I found two workarounds:

    Force MySQL to ignore the character set requested by the client:

    [mysqld]
    skip-character-set-client-handshake=1
    default-character-set=utf8

    Force MySQL to run SET NAMES utf8 on every connection:

    [mysqld]
    init-connect='SET NAMES utf8'

    Both have drawbacks on a multi-user MySQL server: the first disables character set conversion altogether, and the second forces every connection to produce UTF-8 (note also that init-connect is not executed for clients connecting with the SUPER privilege).

    [EDIT] Found the problem. The 'charset' parameter was unset at the last minute before being sent to the SQL backend. This is probably because MySQL does not understand 'utf-8' as a character set name, only 'utf8', so a MySQL-specific mapping is required to make it work. I just worked around it by translating utf-8 to utf8. This patch won't work with any other databases, though.

    --- lib/Horde/Share/sql.php.orig    2011-07-04 17:09:33.349334890 +0300
    +++ lib/Horde/Share/sql.php 2011-07-04 17:11:06.238636462 +0300
    @@ -753,7 +753,13 @@
         /* Connect to the sql server using the supplied parameters. */
         require_once 'MDB2.php';
         $params = $this->_params;
    -    unset($params['charset']);
    +
    +    if ($params['charset'] == 'utf-8') {
    +        $params['charset'] = 'utf8';
    +    } else {
    +        unset($params['charset']);
    +    }
    +
         $this->_write_db = &MDB2::factory($params);
         if (is_a($this->_write_db, 'PEAR_Error')) {
             Horde::fatal($this->_write_db, __FILE__, __LINE__);
    @@ -792,7 +798,13 @@
         /* Check if we need to set up the read DB connection seperately. */
         if (!empty($this->_params['splitread'])) {
             $params = array_merge($params, $this->_params['read']);
    -        unset($params['charset']);
    +
    +        if ($params['charset'] == 'utf-8') {
    +            $params['charset'] = 'utf8';
    +        } else {
    +            unset($params['charset']);
    +        }
    +
             $this->_db = &MDB2::singleton($params);
             if (is_a($this->_db, 'PEAR_Error')) {
                 Horde::fatal($this->_db, __FILE__, __LINE__);

    Read the article

  • Index a set of files to search their text quickly?

    - by Ricket
    I have a unique need: I frequently search a large set of text files for a keyword. Right now, I open up Notepad++ and use the "Find in files" feature. It works just fine, but with the number of files involved, each search takes several minutes to complete. Is there a good program better suited for this purpose – perhaps one that indexes a set of files and then lets you search the set repeatedly and very quickly? It would greatly speed up my workflow.
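
    Desktop search tools exist for exactly this (DocFetcher and Recoll, for example, index folders of text files and then answer keyword queries near-instantly). The underlying idea is just an inverted index, built once and queried many times – a minimal sketch, with the folder path invented for illustration:

    import os
    import re
    from collections import defaultdict

    def build_index(root):
        """Map each lowercased word to the set of files containing it."""
        index = defaultdict(set)
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, encoding="utf-8", errors="ignore") as f:
                        for word in re.findall(r"\w+", f.read().lower()):
                            index[word].add(path)
                except OSError:
                    pass  # skip unreadable files
        return index

    index = build_index(r"C:\my\text\files")   # one-off scan (hypothetical folder)
    print(index.get("keyword", set()))         # each lookup afterwards is instant

    The ready-made tools automate the part this sketch skips: rebuilding the index incrementally as files change.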

    Read the article

  • How do I set up anonymous email forwarder using cPanel?

    - by Gravitas
    Some companies demand your email address, then send you spam. I'm quite familiar with cPanel. How would I set up an anonymous email forwarder, so I can give them a valid email address, and kill that email address if the company turns into an evil spammer? Note that to be effective, it would have to filter out any email addresses listed in the body of the forwarded email (otherwise those email addresses will end up on their spam list too).

    Read the article

  • Is it possible to set Transmission to show list of files as collapsed by default?

    - by Rasmus
    When downloading a torrent with a large folder hierarchy, I often find myself losing the overview of the torrent's content, as Transmission shows the entire folder structure expanded under Properties > Files. Most annoyingly, if one collapses the tree and closes the dialog, the tree will be expanded again when one re-accesses it. Is it somehow possible to set Transmission to show torrent content as collapsed by default?

    Read the article

  • How to set multiple nameservers in /etc/resolv.conf which sticks on reboot?

    - by chrone
    Ubuntu 14.04 Server edition displays only "nameserver 127.0.0.1" in /etc/resolv.conf after each reboot if the dns-nameservers line in /etc/network/interfaces contains 127.0.0.1 together with another DNS server such as Google Public DNS. In /etc/network/interfaces I set: dns-nameservers 127.0.0.1 8.8.8.8 But after a reboot, /etc/resolv.conf gives me only: nameserver 127.0.0.1 Shouldn't "nameserver 8.8.8.8" be listed in /etc/resolv.conf too? Thanks in advance.
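
    A likely explanation – offered as an assumption, since it depends on the resolvconf package that 14.04 uses to generate /etc/resolv.conf – is that resolvconf deliberately truncates the nameserver list after a loopback address, on the theory that a local resolver at 127.0.0.1 forwards to the upstream servers itself. If that is the cause, the behaviour can be switched off:

    # /etc/default/resolvconf
    TRUNCATE_NAMESERVER_LIST_AFTER_LOOPBACK_ADDRESS=no

    then run sudo resolvconf -u to regenerate /etc/resolv.conf without rebooting.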

    Read the article

  • How do I set up a .emacs initialization file? [migrated]

    - by Harry Weston
    I am using Linux Fedora 13 (Constantine) and Emacs 23.1.1. I am trying to set up a .emacs file for initialization, by using Emacs itself to edit and save a file named .emacs in my home directory. However, although the file is there, Emacs does not seem to recognize it. What might I be doing wrong? And is there a simple line of text for .emacs that will show me whether it is being used, e.g. by displaying a message when Emacs starts up?
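
    A quick way to verify whether the file is read at all is to put a message call on its first line – a one-liner, nothing version-specific:

    ;; first line of ~/.emacs – writes to the echo area and the *Messages* buffer at startup
    (message "~/.emacs was loaded")

    If the message never appears, check the value of user-init-file inside Emacs (C-h v user-init-file) to see which init file was actually read; a stray extension added by the editor (e.g. .emacs.txt) or an unexpected $HOME are common causes.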

    Read the article

  • How can I set the date format to my country setting?

    - by Jamina Meissner
    I am German, but I use only English software, hence I am also using English Ubuntu. It's not that I don't know how to install German Ubuntu; it's that I prefer to work in an English software environment. However, I would like to keep the date & time format in the German format, just as I use a German keyboard layout in English Ubuntu.

    I can set the time format to 24h time, but how can I set the date format to the German format? It is irritating for me to see the month before the day: instead of "Oct 14 15:16" I want it to display "14 Okt 15:16" or (if only English is available) "14 Oct 15:16" or "14th Oct 15:16". At the very least, the number of the day should be displayed before the month. In Windows, it was no problem to choose time/date/currency settings according to a chosen country. Where can I do this in Ubuntu? The best would be if I could freely enter the date/time format myself with variables (DD.MM hh.mm.ss etc). I found answers for Ubuntu 11.04, but not for Ubuntu 12.04. I am using Ubuntu 12.04, 64-bit. Keep in mind that I am a beginner, so I'd like to be able to do this via the GUI, if possible.

    EDIT: I found the answer in a forum. Go to System Settings... and choose Language Support. There are two tabs, Language and Regional Formats; you start on the Language tab. On the Language tab, click Install / Remove Languages. A window with a list of languages opens. Mark the language(s) whose time/date/currency format you want to add and click Apply Changes. Ubuntu will now download and install the additional language files, as well as help files of other applications in this language, so don't be surprised by the download. When Ubuntu has finished applying the changes, switch to the Regional Formats tab. (Do not change the language for menus and windows on the Language tab if you only want to change the date/time/unit format.) There you can choose from the dropdown list the language whose conventions are used for dates, times, currency and units. Log out and log in again for the changes to take effect.
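
    For reference, the same result can be had from the command line by setting LC_TIME separately from the display language – a sketch, assuming German formats are wanted alongside English messages:

    # /etc/default/locale (system-wide); the same lines can be exported from ~/.profile instead
    LANG=en_US.UTF-8
    LC_TIME=de_DE.UTF-8

    The de_DE.UTF-8 locale must exist first (sudo locale-gen de_DE.UTF-8), and the change takes effect at the next login.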

    Read the article

  • How to get the user's set of date format pattern strings? (3 replies)

    I would like to get the current user's set of date format pattern strings as listed in the Control Panel regional settings applet. For my UK English system I see the following patterns listed:

    Short Date:
    dd/MM/yyyy
    dd/MM/yy
    d/M/yy
    d.M.yy
    yyyy MM dd

    Long Date:
    dd MMMM yyyy
    d MMMM yyyy

    If I use GetDateTimeFormats (d and D) the results match the expected patterns above, but of course they're the for...

    Read the article
