Search Results

Search found 45989 results on 1840 pages for 'type parameter'.


  • Google Search Parameter Question

    - by Brian
    I've been trying to determine the different parameters used by Google in their search queries. In particular, the usg parameter is the one giving me trouble. Here is an example value for it, taken from an actual Google query: usg=0_zDqudnCN52ATGjAl3tignXNtBo4%3D Does anyone recognize it or know what it could be for? I've done a bit of digging, but haven't found any confirmation as to what it could be. Here is the link that I took a look at: http://www.webmasterworld.com/google/3892573.htm

    Read the article

  • Setting up a local AI server - easy with Solaris 11

    - by Stefan Hinker
    Many things are new in Solaris 11; Autoinstall is one of them. If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch. Well, almost, as the concepts are similar, and it's not all that difficult. Just new.

    I wanted to have an AI server that I could use for demo purposes, on the train if need be. That answers the question of hardware requirements: portable. But let's start at the beginning.

    First, you need an OS image, of course. In the new world of Solaris 11, it is now called a repository. The original can be downloaded from the Solaris 11 page at Oracle. What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat. MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page. With that, building the repository is quick and simple:

    # zfs create -o mountpoint=/export/repo rpool/ai/repo
    # zfs create rpool/ai/repo/sol11
    # mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
    # rsync -aP /mnt/repo /export/repo/sol11
    # umount /mnt
    # pkgrepo rebuild -s /export/repo/sol11/repo
    # zfs snapshot rpool/ai/repo/sol11@fcs
    # pkgrepo info -s /export/repo/sol11/repo
    PUBLISHER PACKAGES STATUS UPDATED
    solaris   4292     online 2012-03-12T20:47:15.378639Z

    That's all there is to it. Let's make a snapshot, just to be on the safe side. You never know when one will come in handy. To use this repository, you could just add it as a file-based publisher:

    # pkg set-publisher -g file:///export/repo/sol11/repo solaris

    In case I'd want to access this repository through a (virtual) network, I'll now quickly activate the repository service:

    # svccfg -s application/pkg/server \
        setprop pkg/inst_root=/export/repo/sol11/repo
    # svccfg -s application/pkg/server setprop pkg/readonly=true
    # svcadm refresh application/pkg/server
    # svcadm enable application/pkg/server

    That's all you need - now point your browser to http://localhost/ to view your beautiful repository server. Step 1 is done. All of this, by the way, is nicely documented in the README file that's contained in the repository image.

    Of course, we already have updates to the original release. You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index. You can simply add these to your existing repository or create separate repositories for each SRU. The individual SRUs are self-sufficient and incremental - SRU4 includes all updates from SRU2 and SRU3. With ZFS, you can also get both: a full repository with all updates and at the same time incremental ones up to each of the updates:

    # mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
    # pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
    # umount /mnt
    # pkgrepo rebuild -s /export/repo/sol11/repo
    # zfs snapshot rpool/ai/repo/sol11@sru4
    # zfs set snapdir=visible rpool/ai/repo/sol11
    # svcadm restart svc:/application/pkg/server:default

    The normal repository is now updated to SRU4. Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update, located at /export/repo/sol11/.zfs/snapshot/fcs. If you like, you can also create another repository service for each update, running on a separate port.

    But now let's continue with the AI server. Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.
    Since I already have one active (for my SunRay installation), and since it's a good idea to have these kinds of services separate anyway, I decided to create this in a Zone. So, let's create one first:

    # zfs create -o mountpoint=/export/install rpool/ai/install
    # zfs create -o mountpoint=/zones rpool/zones
    # zonecfg -z ai-server
    zonecfg:ai-server> create
    create: Using system default template 'SYSdefault'
    zonecfg:ai-server> set zonepath=/zones/ai-server
    zonecfg:ai-server> add dataset
    zonecfg:ai-server:dataset> set name=rpool/ai/install
    zonecfg:ai-server:dataset> set alias=install
    zonecfg:ai-server:dataset> end
    zonecfg:ai-server> commit
    zonecfg:ai-server> exit
    # zoneadm -z ai-server install
    # zoneadm -z ai-server boot ; zlogin -C ai-server

    Give it a hostname and IP address at first boot, and there's the Zone. As a publisher for Solaris packages, it will be bound to the "System Publisher" from the Global Zone. The /export/install filesystem, of course, is intended to be used by the AI server. Let's configure it now:

    # zlogin ai-server
    root@ai-server:~# pkg install install/installadm
    root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
      -s pkg://solaris/install-image/[email protected],5.11-0.175.0.0.0.2.1482 \
      -d /export/install/fcs -i 192.168.2.20 -c 3

    With that, the core AI server is already done. What happened here? First, I installed the AI server software. IPS makes that nice and easy. If necessary, it'll also pull in the required DHCP server and anything else that might be missing. Watch out for that DHCP server software. In Solaris 11, there are two different versions: the one you might know from Solaris 10 and earlier, and a new one from ISC. The latter is the one we need for AI. The SMF service names of both are very similar. The "old" one is "svc:/network/dhcp-server:default". The ISC server comes with several SMF services; we at least need "svc:/network/dhcp/server:ipv4".

    The command "installadm create-service" creates the installation service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11. (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.) The boot environment for clients is created in /export/install/fcs and the DHCP server is configured for 3 IP addresses starting at 192.168.2.20. This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf. An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.

    Now we would be ready to register and install the first client. It would be installed with the default "solaris-large-server" using the publisher "http://pkg.oracle.com/solaris/release" and would query its configuration interactively at first boot. This makes it very clear that an AI server is really only a boot server. The true source of packages to install can be different. Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

    The configuration of a client is controlled by manifests and profiles. The manifest controls which packages are installed and how the filesystems are laid out. In that, it's very much like the old "rules.ok" file in Jumpstart. Profiles contain additional configuration like root passwords, primary user account, IP addresses, keyboard layout etc.
    Hence, profiles are very similar to the old sysid.cfg file.

    The easiest way to get your hands on a manifest is to ask the AI server we just created to give us its default one. Then modify that to our liking and give it back to the install server to use:

    root@ai-server:~# mkdir -p /export/install/configs/manifests
    root@ai-server:~# cd /export/install/configs/manifests
    root@ai-server:~# installadm export -n x86-fcs -m orig_default \
      -o orig_default.xml
    root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
    root@ai-server:~# vi s11-fcs.small.local.xml
    root@ai-server:~# more s11-fcs.small.local.xml
    <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
    <auto_install>
      <ai_instance name="S11 Small fcs local">
        <target>
          <logical>
            <zpool name="rpool" is_root="true">
              <filesystem name="export" mountpoint="/export"/>
              <filesystem name="export/home"/>
              <be name="solaris"/>
            </zpool>
          </logical>
        </target>
        <software type="IPS">
          <destination>
            <image>
              <!-- Specify locales to install -->
              <facet set="false">facet.locale.*</facet>
              <facet set="true">facet.locale.de</facet>
              <facet set="true">facet.locale.de_DE</facet>
              <facet set="true">facet.locale.en</facet>
              <facet set="true">facet.locale.en_US</facet>
            </image>
          </destination>
          <source>
            <publisher name="solaris">
              <origin name="http://192.168.2.12/"/>
            </publisher>
          </source>
          <!-- By default the latest build available, in the specified IPS
               repository, is installed. If another build is required, the
               build number has to be appended to the 'entire' package in
               the following form: <name>pkg:/entire@0.5.11#</name> -->
          <software_data action="install">
            <name>pkg:/[email protected],5.11-0.175.0.0.0.2.0</name>
            <name>pkg:/group/system/solaris-small-server</name>
          </software_data>
        </software>
      </ai_instance>
    </auto_install>
    root@ai-server:~# installadm create-manifest -n x86-fcs -d \
      -f ./s11-fcs.small.local.xml
    root@ai-server:~# installadm list -m -n x86-fcs
    Manifest            Status   Criteria
    --------            ------   --------
    S11 Small fcs local Default  None
    orig_default        Inactive None

    The major points in this new manifest are:
    - Install "solaris-small-server".
    - Install a few locales less than the default. I'm not that fluent in French or Japanese...
    - Use my own package service as publisher, running on IP address 192.168.2.12.
    - Install the initial release of Solaris 11: pkg:/[email protected],5.11-0.175.0.0.0.2.0

    Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.
    The modular approach makes it easy to configure numerous clients later on:

    root@ai-server:~# mkdir -p /export/install/configs/profiles
    root@ai-server:~# cd /export/install/configs/profiles
    root@ai-server:~# sysconfig create-profile -o default.xml
    root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
    root@ai-server:~# cp default.xml user.xml
    root@ai-server:~# vi general.xml mars.xml user.xml
    root@ai-server:~# more general.xml mars.xml user.xml
    ::::::::::::::
    general.xml
    ::::::::::::::
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type="profile" name="sysconfig">
      <service version="1" type="service" name="system/timezone">
        <instance enabled="true" name="default">
          <property_group type="application" name="timezone">
            <propval type="astring" name="localtime" value="Europe/Berlin"/>
          </property_group>
        </instance>
      </service>
      <service version="1" type="service" name="system/environment">
        <instance enabled="true" name="init">
          <property_group type="application" name="environment">
            <propval type="astring" name="LANG" value="C"/>
          </property_group>
        </instance>
      </service>
      <service version="1" type="service" name="system/keymap">
        <instance enabled="true" name="default">
          <property_group type="system" name="keymap">
            <propval type="astring" name="layout" value="US-English"/>
          </property_group>
        </instance>
      </service>
      <service version="1" type="service" name="system/console-login">
        <instance enabled="true" name="default">
          <property_group type="application" name="ttymon">
            <propval type="astring" name="terminal_type" value="vt100"/>
          </property_group>
        </instance>
      </service>
      <service version="1" type="service" name="network/physical">
        <instance enabled="true" name="default">
          <property_group type="application" name="netcfg">
            <propval type="astring" name="active_ncp" value="DefaultFixed"/>
          </property_group>
        </instance>
      </service>
      <service version="1" type="service" name="system/name-service/switch">
        <property_group type="application" name="config">
          <propval type="astring" name="default" value="files"/>
          <propval type="astring" name="host" value="files dns"/>
          <propval type="astring" name="printer" value="user files"/>
        </property_group>
        <instance enabled="true" name="default"/>
      </service>
      <service version="1" type="service" name="system/name-service/cache">
        <instance enabled="true" name="default"/>
      </service>
      <service version="1" type="service" name="network/dns/client">
        <property_group type="application" name="config">
          <property type="net_address" name="nameserver">
            <net_address_list>
              <value_node value="192.168.2.1"/>
            </net_address_list>
          </property>
        </property_group>
        <instance enabled="true" name="default"/>
      </service>
    </service_bundle>
    ::::::::::::::
    mars.xml
    ::::::::::::::
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type="profile" name="sysconfig">
      <service version="1" type="service" name="network/install">
        <instance enabled="true" name="default">
          <property_group type="application" name="install_ipv4_interface">
            <propval type="astring" name="address_type" value="static"/>
            <propval type="net_address_v4" name="static_address" value="192.168.2.100/24"/>
            <propval type="astring" name="name" value="net0/v4"/>
            <propval type="net_address_v4" name="default_route" value="192.168.2.1"/>
          </property_group>
          <property_group type="application" name="install_ipv6_interface">
            <propval type="astring" name="stateful" value="yes"/>
            <propval type="astring" name="stateless" value="yes"/>
            <propval type="astring" name="address_type" value="addrconf"/>
            <propval type="astring" name="name" value="net0/v6"/>
          </property_group>
        </instance>
      </service>
      <service version="1" type="service" name="system/identity">
        <instance enabled="true" name="node">
          <property_group type="application" name="config">
            <propval type="astring" name="nodename" value="mars"/>
          </property_group>
        </instance>
      </service>
    </service_bundle>
    ::::::::::::::
    user.xml
    ::::::::::::::
    <!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
    <service_bundle type="profile" name="sysconfig">
      <service version="1" type="service" name="system/config-user">
        <instance enabled="true" name="default">
          <property_group type="application" name="root_account">
            <propval type="astring" name="login" value="root"/>
            <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
            <propval type="astring" name="type" value="role"/>
          </property_group>
          <property_group type="application" name="user_account">
            <propval type="astring" name="login" value="stefan"/>
            <propval type="astring" name="password" value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
            <propval type="astring" name="type" value="normal"/>
            <propval type="astring" name="description" value="Stefan Hinker"/>
            <propval type="count" name="uid" value="12345"/>
            <propval type="count" name="gid" value="10"/>
            <propval type="astring" name="shell" value="/usr/bin/bash"/>
            <propval type="astring" name="roles" value="root"/>
            <propval type="astring" name="profiles" value="System Administrator"/>
            <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
          </property_group>
        </instance>
      </service>
    </service_bundle>
    root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
    root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
    root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
      -c ipv4=192.168.2.100
    root@ai-server:~# installadm list -p
    Service Name Profile
    ------------ -------
    x86-fcs      general.xml
                 mars.xml
                 user.xml
    root@ai-server:~# installadm list -n x86-fcs -p
    Profile     Criteria
    -------     --------
    general.xml None
    mars.xml    ipv4 = 192.168.2.100
    user.xml    None

    Here's the idea behind these files:
    - "general.xml" contains settings valid for all my clients. Stuff like DNS servers, for example, which in my case will always be the same.
    - "user.xml" only contains user definitions, that is, a root password and a primary user. Both of these profiles will be valid for all clients (for now).
    - "mars.xml" defines network settings for an individual client. This profile is associated with an IP address. For this to work, I'll have to tweak the DHCP settings in the next step:

    root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
    root@ai-server:~# vi /etc/inet/dhcpd4.conf
    root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
    host 080027AA3DB1 {
      hardware ethernet 08:00:27:AA:3D:B1;
      fixed-address 192.168.2.100;
      filename "01080027AA3DB1";
    }

    This completes the client preparations. I manually added the IP address for mars to /etc/inet/dhcpd4.conf. This is needed for the "mars.xml" profile. Disabling arbitrary DHCP replies will shut up this DHCP server, making my life in a shared environment a lot more peaceful ;-)

    Now, I of course want this installation to be completely hands-off. For this to work, I'll need to modify the grub boot menu for this client slightly. You can find it in /etc/netboot. "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.
    The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case. If you don't want to change this manually for every client, modify that template to your liking instead.

    root@ai-server:~# cd /etc/netboot
    root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
    root@ai-server:~# vi menu.lst.01080027AA3DB1
    root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
    1,2c1,2
    < default=1
    < timeout=10
    ---
    > default=0
    > timeout=30
    root@ai-server:~# more menu.lst.01080027AA3DB1
    default=1
    timeout=10
    min_mem64=0
    title Oracle Solaris 11 11/11 Text Installer and command line
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive
    title Oracle Solaris 11 11/11 Automated Install
        kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
        module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

    Now just boot the client off the network using PXE boot. For my demo purposes, that's a client from VirtualBox, of course. That's all there is to it. And despite the fact that this blog entry is a little longer - that wasn't all that hard now, was it?

    Read the article

  • SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28

    - by pinaldave
    The key Dynamic Management View (DMV) that helps us understand wait stats is sys.dm_os_wait_stats; this DMV gives us all the information we need to know regarding wait stats. However, the interpretation is left to us. This is a challenge, as understanding wait stats can often be quite tricky. Anyway, we will cover a few wait stats in one of the future articles. Today we will go over a basic understanding of the DMV. The official Book OnLine reference for the DMV is over here: sys.dm_os_wait_stats. I suggest you refer to it for full accuracy. Following is a statement from the online book: “Specific types of wait times during query execution can indicate bottlenecks or stall points within the query. Similarly, high wait times, or wait counts server wide can indicate bottlenecks or hot spots in interaction query interactions within the server instance.” This is the statement which inspired me to write this series. Let us first run the following statement against the DMV:

    SELECT * FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC
    GO

    The above statement will show us a few columns. Here is a quick explanation of each column:

    wait_type – the name of the wait type. There are three different kinds of wait types: resource, queue and external.
    waiting_tasks_count – this incremental counter is a good indication of how frequently the wait is happening. If this number is very high, it is a good indication for us to investigate that particular wait type. It is quite possible that the wait time is considerably low, but the frequency of the wait is very high.
    wait_time_ms – the total wait time accumulated for this wait type. This total includes signal_wait_time_ms.
    max_wait_time_ms – the maximum wait time that has ever occurred for that particular wait type. Using this, one can estimate the intensity of the wait type in the past. It is not guaranteed that this max wait time will occur every time, so do not over-invest yourself here.
    signal_wait_time_ms – the wait time between the moment a thread is marked as runnable and the moment it gets to the running state. If the runnable queue is very long, you will find that this wait time becomes high.

    Additionally, please note that this DMV does not show the current wait type or wait stats. It is a cumulative view of all the wait stats since the server (instance) was restarted or the wait stats were cleared. In a future blog post, we will also cover two more DMVs which can be helpful in identifying wait-related issues: sys.dm_os_waiting_tasks and sys.dm_exec_requests.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • What is vt.handoff=7 parameter in grub.cfg

    - by sirkubax
    I wonder what the vt.handoff=7 parameter does. I cannot find any good man page for it... BTW, if you have a nice description of:

    search --no-floppy --fs-uuid --set=root

    I would be happy :) grub.cfg example:

    menuentry 'FAILSAFE' --class ubuntu --class gnu-linux --class gnu --class os {
        recordfail
        set gfxpayload=$linux_gfx_mode
        insmod part_msdos
        insmod ext2
        set root='(hd0,msdos8)'
        search --no-floppy --fs-uuid --set=root 36286167-4eba-4a1e-a202-155c6baafa01
        linux /boot/vmlinuz-2.6.37-12-generic root=UUID=36286167-4eba-4a1e-a202-155c6baafa01 ro vt.handoff=7 quiet splash
        initrd /boot/initrd.img-2.6.37-12-generic
    }

    BTW2 - I cannot create the tag vt.handoff ;(

    Read the article

  • Generic and type safe I/O model in any language

    - by Eduardo León
    I am looking for an I/O model, in any programming language, that is generic and type-safe. By genericity, I mean there should not be separate functions for performing the same operation on different devices (read_file, read_socket, read_terminal). Instead, a single read operation works on all read-able devices, a single write operation works on all write-able devices, and so on. By type safety, I mean operations that do not make sense should not even be expressible in the first place. Using the read operation on a non-read-able device ought to cause a type error at compile time, and similarly for using the write operation on a non-write-able device. Is there any generic and type safe I/O model?
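    For illustration, here is a minimal sketch (in C#, all names made up) of the kind of model I mean: read and write are single generic operations, and misuse fails to compile.

    using System;

    // Capability interfaces: a device advertises read-ability / write-ability.
    public interface IReadable { int Read(byte[] buffer, int offset, int count); }
    public interface IWritable { void Write(byte[] buffer, int offset, int count); }

    public static class IO
    {
        // One generic read for every read-able device.
        public static int Read<T>(T device, byte[] buffer) where T : IReadable
            => device.Read(buffer, 0, buffer.Length);

        // One generic write for every write-able device.
        public static void Write<T>(T device, byte[] buffer) where T : IWritable
            => device.Write(buffer, 0, buffer.Length);
    }

    // A socket-like device is both; a log sink is write-only.
    public class FakeSocket : IReadable, IWritable
    {
        public int Read(byte[] b, int o, int c) => 0;
        public void Write(byte[] b, int o, int c) { }
    }

    public class LogSink : IWritable
    {
        public void Write(byte[] b, int o, int c) { }
    }

    // IO.Write(new LogSink(), data) compiles;
    // IO.Read(new LogSink(), data) is a compile-time type error.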

    Read the article

  • SharePoint Content Type Cheat Sheet

    - by Bil Simser
    Principle
    Any application or solution built in SharePoint must use a custom content type over adding columns to lists. The only exception to this is one-off solutions that have no life-cycle, proof-of-concepts, etc.

    Creating Content Types
    - Web UI. Not portable, POC only.
    - C# or Declarative (XML). Must deploy these as Features.

    Rule
    Do not change the base XML for a Content Type after deploying. The only exception to this rule is that you can re-deploy a modified Content Type definition only after completely removing it from the environment (either programmatically or by hand).

    Updating Content Types
    Update and push down to child types:
    - Web UI. Manual for each environment. Document the steps required for repeatability.
    - Feature Upgrade. Preferred solution.
    - C#. If you created the content type through code you might want to go this route.
    - Create new modified Content Types and hide the old one. Not recommended but useful for legacy.

    References
    - Create Custom Content Types in SharePoint 2010 (C#)
    - Content Type Definitions (XML)
    - Creating Content Types (XML and C#)
    - Updating Approaches
    - Updating Child Content Types

    Agree or disagree?
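    For the C# route, here is a minimal sketch of "create, then update and push down" using the server object model. The site URL, content type name and group are placeholders; this assumes SharePoint 2010's Microsoft.SharePoint assembly and farm-level rights.

    using System;
    using Microsoft.SharePoint;

    class ContentTypeDemo
    {
        static void Main()
        {
            // Placeholder URL; run on a farm server.
            using (SPSite site = new SPSite("http://intranet"))
            using (SPWeb web = site.RootWeb)
            {
                // Derive the custom content type from the built-in Document type.
                SPContentType parent = web.AvailableContentTypes[SPBuiltInContentTypeId.Document];
                SPContentType invoice = new SPContentType(parent, web.ContentTypes, "Invoice");
                invoice.Group = "Custom Content Types";
                invoice = web.ContentTypes.Add(invoice);

                // Update(true) pushes the change down to child content types,
                // matching the "update and push down" rule above.
                invoice.Update(true);
            }
        }
    }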

    Read the article

  • SharePoint 2010 Hosting :: How to Create an External Content Type SharePoint 2010

    - by mbridge
    In this simple article I will try to show how easy it is, with SharePoint Designer 2010, to create an External Content Type pointing at an external database and integrate it with our SharePoint portals. You can download SharePoint Designer 2010 here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d88a1505-849b-4587-b854-a7054ee28d66&displaylang=en For this example I will create a database in SQL Server and will use SharePoint Designer 2010 to create the connections, using a list as a mirror between our SharePoint portal and the database. The first thing we need to do is connect to SQL Server, create our database called "Contacts" and add the "Contact" table with the following fields. When we create the External Content Type, we need to associate the content type - in this case I am using the Generic List - and then we can create the connection to the external data source. After creating the connection to the database, we can define which columns we will use and which operations we will add to our custom list. For this example I selected all the operations that come by default. These operations are very important because the business rules are defined in each operation. After we create the different operations, we can create the custom list, define how the operations will behave, and add the name for our custom list. If you try to access the new custom list called "Custom Contact", you will see that we do not yet have access to Business Data Connectivity. To resolve this issue we need to give users access and permissions to the custom External Content Type BDC connection in Central Administration. Access the Central Administration page and select the option "Service Applications Tab > Manage Service Applications". There, select the "Business Data Connectivity Service" and then select "Manage". This option will list all External Content Types. Choose the External Content Type we created and select the option "Set Object Permissions"; this option allows us to add users to the BDC and manage the permissions for the custom list. After the correct permissions are given, we can access the data in our custom Contact list and start creating new items, along with all the other options and operations we defined for the list. Hope you like this little article about connecting database content to a SharePoint portal using External Content Types and BCS. Thank you.

    Read the article

  • SQL Server and the XML Data Type : Data Manipulation

    The introduction of the xml data type, with its own set of methods for processing xml data, made it possible for SQL Server developers to create columns and variables of the type xml. Deanna Dicken examines the modify() method, which provides for data manipulation of the XML data stored in the xml data type via XML DML statements.

    Read the article

  • Spaces in type attribute for Behavior Extension Configuration Issues

    - by Shawn Cicoria
    If you’ve deployed your WCF Behavior Extension and you get a Configuration Error, it might just be that you’re lacking a space between your "Type" name and the "Assembly" name. You'll get the following error message:

    Verify that the extension is registered in the extension collection at system.serviceModel/extensions/behaviorExtensions

    So, if you’ve entered it as below, without the space, it will fail:

    <system.serviceModel>
      <extensions>
        <behaviorExtensions>
          <add name="appFabricE2E" type="Fabrikam.Services.AppFabricE2EBehaviorElement,Fabrikam.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
        </behaviorExtensions>
      </extensions>

    The following will work – notice the additional space between the Type name and the Assembly name:

    <system.serviceModel>
      <extensions>
        <behaviorExtensions>
          <add name="appFabricE2E" type="Fabrikam.Services.AppFabricE2EBehaviorElement, Fabrikam.Services, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"/>
        </behaviorExtensions>
      </extensions>

    Read the article

  • Using Google Tag Manager to define the page type

    - by Daffy
    So, I am looking to add a tag that I want to use for A/B testing; however, we don't have a page-type URL structure. Fortunately the tool can recognise the page type if I pass it via Javascript:

    <script type="text/javascript">
      window.isProductPage = true;
    </script>

    I have been told to use the above. I have created the script in Google Tag Manager (GTM); however, I now need to know how to make it run on those pages in GTM. I have looked through the code and there are div classes that are unique to each page. Can I use these as an indication of page type?

    Read the article

  • After MS Access Conversion 97 --> 2002 I get 'Enter Parameter Value' when exiting a form.

    - by user270370
    Hi, so when I exit a form in my newly converted .mdb, it asks me to Enter Parameter Value. It goes through the values required for a query that is run on a list box on the form (i.e. if I enter a value, it asks for another). The query has not been changed during the conversion. The values it is getting for the query come from text boxes on the same form. There are a few requeries in the form (run from VB), so I imagine that the query is rerunning on exit (although this isn't explicit in the form properties). I'm not quite sure how to go about solving this. Your help would be great. Thanks

    Read the article

  • How to infer the type of a derived class in base class?

    - by enzi
    I want to create a method that allows me to change arbitrary properties of classes that derive from my base class. The result should look like this: SetPropertyValue("size.height", 50); – where size is a property of my derived class and height is a property of size. I'm almost done with my implementation, but there's one final obstacle that I want to solve before moving on. To describe it I will first have to explain my implementation a bit:

    - Properties that can be modified are decorated with an attribute.
    - There's a method in my base class that searches for all derived classes and their decorated properties.
    - For each property I generate a "property modifier", a class that contains 2 delegates: one to set and one to get the value of the property.
    - Property modifiers are stored in a dictionary, with the name of the property as key.
    - In my base class, there is another dictionary that contains all property-modifier dictionaries, with the Type of the respective class as key.

    What the SetPropertyValue method does is this:

    1. Get the correct property-modifier dictionary, using the concrete type of the derived class (<- yet to solve)
    2. Get the property modifier of the property to change (e.g. of the property size)
    3. Use the get or set delegate to modify the property's value

    Some example code to clarify further:

    private static Dictionary<RuntimeTypeHandle, object> EditableTypes; // property-modifier dictionaries

    protected void SetPropertyValue<T>(EditablePropertyMap<T> map, string property, object value)
    {
        var modifier = map[property];   // get the property modifier
        modifier.Set((T)this, value);   // use the set delegate (encapsulated in a method)
    }

    In the above code, T is the Type of the actual (derived) class. I need this type for the get/set delegates. The problem is how to get the EditablePropertyMap<T> when I don't know what T is. My current (ugly) solution is to pass the map in an overridden virtual method in the derived class:

    public override void SetPropertyValue(string property, object value)
    {
        base.SetPropertyValue((EditablePropertyMap<ExampleType>)EditableTypes[typeof(ExampleType)], property, value);
    }

    What this does is: get the correct dictionary containing the property modifiers of this class using the class's type, cast it to the appropriate type and pass it to the SetPropertyValue method. I want to get rid of the SetPropertyValue method in my derived classes (since there are a lot of derived classes), but don't know yet how to accomplish that. I cannot just make a virtual GetEditablePropertyMap<T> method, because I cannot infer a concrete type for T then. I also cannot access my dictionary directly with a type and retrieve an EditablePropertyMap<T> from it, because I cannot cast to it from object in the base class, since again I do not know T. I found some neat tricks to infer types (e.g. by adding a dummy T parameter), but cannot apply them to my specific problem. I'd highly appreciate any suggestions you may have for me.
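    For illustration, here is a minimal sketch (all names hypothetical) of the self-referencing-generic route I have been considering, which makes T known statically at the cost of changing the base class signature:

    using System;
    using System.Collections.Generic;

    // Hypothetical modifier type, as described above: typed get/set delegates.
    public class PropertyModifier<T>
    {
        public Action<T, object> Set;
        public Func<T, object> Get;
    }

    // The base class takes the concrete derived type as T, so no override
    // or cast is needed in the derived classes.
    public abstract class EditableBase<T> where T : EditableBase<T>
    {
        // Statics of a generic type are per closed type, i.e. per derived class.
        protected static readonly Dictionary<string, PropertyModifier<T>> Map =
            new Dictionary<string, PropertyModifier<T>>();

        public void SetPropertyValue(string property, object value)
        {
            // (T)this is safe: T is constrained to the declaring subclass.
            Map[property].Set((T)this, value);
        }
    }

    public class ExampleType : EditableBase<ExampleType>
    {
        public int Size { get; set; }

        static ExampleType()
        {
            Map["size"] = new PropertyModifier<ExampleType>
            {
                Set = (o, v) => o.Size = (int)v,
                Get = o => o.Size
            };
        }
    }

    With this, new ExampleType().SetPropertyValue("size", 50) resolves the right map without any per-class override.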

    Read the article

  • Sort Grid Columns of mixed type in EXTJS Grid

    - by Amit
    Hello, I want to sort ExtJS grid columns. I have a column of type float, and from the server side I am getting values which can contain a "-" value; the grid then displays the value NaN instead of "-", and the sort no longer works. My requirement is to create a custom sort which first sorts based on number and then based on string. Suggestions are welcome, as a renderer also does not work for me. My JSON string is:

    {metaData:{"totalProperty":"total", "root":"records","fields":[{"header":"Part Number##false","name":"XJE010^VT-007!0","type":"string"},{"header":"Marketing Status##false","name":"STP716^VT-007!0","type":"string"},{"header":"Package##false","name":"XJE016^VT-007!0","type":"string"},{"header":"Automotive Grade##false","name":"STP472^VT-007!0","type":"string"},{"header":"VDSS##false","name":"XJG810^VT-007!0","type":"float"},{"header":"Drain Current (Dc)(I_D) % (A)##false","name":"XJG273^VT-006!0","type":"float"},{"header":"RDS(on) (@VGS=10V) % (&#937;)##false","name":"XJG640^VT-006!3","type":"float"},{"header":"Features##false","name":"GNP023^VT-007!0","type":"string"},{"header":"RDS(on) (@4.5 or 5V) % (&#937;)##false","name":"XJG640^VT-006!6","type":"float"},{"header":"RDS(on) (@2.7V) % (&#937;)##false","name":"XJG640^VT-006!7","type":"float"},{"header":"RDS(on) (@1.8V) % (&#937;)##false","name":"XJG640^VT-006!8","type":"float"},{"header":"Free Samples##false","name":"STP0881^VT-007!0","type":"string"},{"header":"Total Gate Charge(Qg) typ ()##true","name":"STP049^VT-002!0","type":"float"},{"header":"Total Power Dissipation(PD) % (W)##true","name":"XJG820^VT-006!0","type":"float"}]},"success":"true", "total":13,"records":[{"XJE010^VT-007!0":"STB80PF55$$/cn/analog/product/67164.jsp","STP716^VT-007!0":"Active","XJE016^VT-007!0":"D2PAK","STP472^VT-007!0":"_","XJG810^VT-007!0":"-55","XJG273^VT-006!0":"80","XJG640^VT-006!3":".018","GNP023^VT-007!0":"-","XJG640^VT-006!6":"-","XJG640^VT-006!7":"-","XJG640^VT-006!8":"-","STP0881^VT-007!0":"No","STP049^VT-002!0":"190","XJG820^VT-006!0":"300"},{"XJE010^VT-007!0":"STD10PF06$$/cn/analog/product/64543.jsp","STP716^VT-007!0":"Active","XJE016^VT-007!0":"IPAK TO-251 TO 252 DPAK","STP472^VT-007!0":"_","XJG810^VT-007!0":"-60","XJG273^VT-006!0":"-10","XJG640^VT-006!3":".2","GNP023^VT-007!0":"-","XJG640^VT-006!6":"-","XJG640^VT-006!7":"-","XJG640^VT-006!8":"-","STP0881^VT-007!0":"No ...

    Regards, Amit

    Read the article

  • SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28

    - by pinaldave
    This is another common wait type. However, I still frequently see people getting confused between the PAGEIOLATCH_X and PAGELATCH_X wait types. Actually, there is a big difference between the two: PAGEIOLATCH is related to IO issues, while PAGELATCH is not related to IO issues but is oftentimes linked to a buffer issue.

    Before we delve deeper into this interesting topic, let us first understand what a latch is. Latches are internal SQL Server locks which can be described as very lightweight and short-term synchronization objects. Latches are not primarily to protect pages being read from disk into memory; a latch is a synchronization object for any in-memory access to any portion of a log or data file. [Updated based on comment of Paul Randal]

    The difference between locks and latches is that locks seal all the involved resources throughout the duration of the transaction (and other processes will have no access to the object), whereas latches lock the resources only during the time the data is changed. This way, a latch is able to maintain the integrity of the data between the storage engine and the data cache. A latch is a short-lived lock that is put on resources in the buffer cache and on the physical disk when data is moved in either direction. As soon as the data is moved, the latch is released.

    Now, let us understand the wait stat types related to latches. From Book On-Line:

    PAGELATCH_DT – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Destroy mode.
    PAGELATCH_EX – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Exclusive mode.
    PAGELATCH_KP – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Keep mode.
    PAGELATCH_SH – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Shared mode.
    PAGELATCH_UP – Occurs when a task is waiting on a latch for a buffer that is not in an I/O request. The latch request is in Update mode.

    PAGELATCH_X Explanation: When there is contention on access to the in-memory pages, this wait type shows up. It is quite possible that some of the pages in memory are in very high demand; for SQL Server to access them and put a latch on them, it will have to wait, and this wait type is created at that time. Additionally, it is commonly visible when TempDB has higher contention as well. If there are indexes that are heavily used, contention can be created as well, leading to this wait type.

    Reducing PAGELATCH_X waits: The following counters are useful to understand the status of the PAGELATCH:

    Average Latch Wait Time (ms): The wait time for latch requests that have to wait.
    Latch Waits/sec: The number of latch requests that could not be granted immediately.
    Total Latch Wait Time (ms): The total latch wait time for latch requests in the last second.

    If there is TempDB contention, I suggest that you read the blog post of Robert Davis right away. He has written an excellent blog post regarding how to find out TempDB contention. The same blog post explains the terms in the allocation of GAM, SGAM and PFS. For TempDB contention, Paul Randal explains the optimal settings for TempDB in his misconceptions series. Trace Flag 1118 can be useful, but use it very carefully.

    I totally understand that this blog post is not as clear as my other blog posts. If this wait stat is one of your higher wait types, do leave a comment or send me an email and I will get back to you with my solution for your situation. Looking at all the other wait stats and types together can be effective, as this wait type can help point out the proper bottleneck in your system. Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • A simple Dynamic Proxy

    - by Abhijeet Patel
    Frameworks such as EF4 and MOQ do what most developers consider "dark magic". For instance, in EF4, when you use a POCO for an entity you can opt in to get behaviors such as "lazy-loading" and "change tracking" at runtime merely by ensuring that your type has the following characteristics:

    - The class must be public and not sealed.
    - The class must have a public or protected parameter-less constructor.
    - The class must have public or protected properties.

    Adhere to this and your type is magically endowed with these behaviors without any additional programming on your part. Behind the scenes the framework subclasses your type at runtime and creates a "dynamic proxy" which has these additional behaviors, and when you navigate properties of your POCO, the framework replaces the POCO type with derived type instances. The MOQ framework does similar magic. Let's say you have a simple interface:

    public interface IFoo
    {
        int GetNum();
    }

    We can verify that GetNum() was invoked on a mock like so:

    var mock = new Mock<IFoo>(MockBehavior.Default);
    mock.Setup(f => f.GetNum());
    var num = mock.Object.GetNum();
    mock.Verify(f => f.GetNum());

    Behind the scenes the MOQ framework is generating a dynamic proxy by implementing IFoo at runtime. The call to mock.Object returns the dynamic proxy, on which we then call GetNum and then verify that this method was invoked. No dark magic at all, just clever programming is what's going on here, just not visible and hence it appears magical!

    Let's create a simple dynamic proxy generator which accepts an interface type and dynamically creates a proxy implementing the interface type specified at runtime.

    public static class DynamicProxyGenerator
    {
        public static T GetInstanceFor<T>()
        {
            Type typeOfT = typeof(T);
            var methodInfos = typeOfT.GetMethods();
            AssemblyName assName = new AssemblyName("testAssembly");
            var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
            var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
            var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

            typeBuilder.AddInterfaceImplementation(typeOfT);
            var ctorBuilder = typeBuilder.DefineConstructor(
                MethodAttributes.Public,
                CallingConventions.Standard,
                new Type[] { });
            var ilGenerator = ctorBuilder.GetILGenerator();
            ilGenerator.EmitWriteLine("Creating Proxy instance");
            ilGenerator.Emit(OpCodes.Ret);

            foreach (var methodInfo in methodInfos)
            {
                var methodBuilder = typeBuilder.DefineMethod(
                    methodInfo.Name,
                    MethodAttributes.Public | MethodAttributes.Virtual,
                    methodInfo.ReturnType,
                    methodInfo.GetParameters().Select(p => p.ParameterType).ToArray()
                    );
                var methodILGen = methodBuilder.GetILGenerator();
                methodILGen.EmitWriteLine("I'm a proxy");
                if (methodInfo.ReturnType == typeof(void))
                {
                    methodILGen.Emit(OpCodes.Ret);
                }
                else
                {
                    if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
                    {
                        MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
                        LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
                        methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
                        methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
                        methodILGen.Emit(OpCodes.Call, getMethod);
                        methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
                    }
                    else
                    {
                        methodILGen.Emit(OpCodes.Ldnull);
                    }
                    methodILGen.Emit(OpCodes.Ret);
                }
                typeBuilder.DefineMethodOverride(methodBuilder, methodInfo);
            }

            Type constructedType = typeBuilder.CreateType();
            var instance = Activator.CreateInstance(constructedType);
            return (T)instance;
        }
    }

    Dynamic proxies are created by calling into the following main types: AssemblyBuilder, TypeBuilder, ModuleBuilder and ILGenerator. These types enable dynamically creating an assembly and emitting .NET modules and types into that assembly, all using IL instructions. Let's break down the code above a bit and examine it piece by piece.

    Type typeOfT = typeof(T);
    var methodInfos = typeOfT.GetMethods();
    AssemblyName assName = new AssemblyName("testAssembly");
    var assBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(assName, AssemblyBuilderAccess.RunAndSave);
    var moduleBuilder = assBuilder.DefineDynamicModule("testModule", "test.dll");
    var typeBuilder = moduleBuilder.DefineType(typeOfT.Name + "Proxy", TypeAttributes.Public);

    We are instructing the runtime to create an assembly called "test.dll", and in this assembly we then emit a new module called "testModule". We then emit a new type definition named "typeName"Proxy into this new module. This is the definition of the "dynamic proxy" for type T.

    typeBuilder.AddInterfaceImplementation(typeOfT);
    var ctorBuilder = typeBuilder.DefineConstructor(
        MethodAttributes.Public,
        CallingConventions.Standard,
        new Type[] { });
    var ilGenerator = ctorBuilder.GetILGenerator();
    ilGenerator.EmitWriteLine("Creating Proxy instance");
    ilGenerator.Emit(OpCodes.Ret);

    The newly created type implements type T and defines a default parameterless constructor, in which we emit a call to Console.WriteLine. This call is not necessary, but we do it so that we can see first-hand that our default constructor is invoked when the proxy is constructed.

    var methodBuilder = typeBuilder.DefineMethod(
        methodInfo.Name,
        MethodAttributes.Public | MethodAttributes.Virtual,
        methodInfo.ReturnType,
        methodInfo.GetParameters().Select(p => p.ParameterType).ToArray()
        );

    We then iterate over each method declared on type T and add a method definition of the same name into our "dynamic proxy" definition.

    if (methodInfo.ReturnType == typeof(void))
    {
        methodILGen.Emit(OpCodes.Ret);
    }

    If the return type specified in the method declaration of T is void, we simply return.

    if (methodInfo.ReturnType.IsValueType || methodInfo.ReturnType.IsEnum)
    {
        MethodInfo getMethod = typeof(Activator).GetMethod("CreateInstance", new Type[] { typeof(Type) });
        LocalBuilder lb = methodILGen.DeclareLocal(methodInfo.ReturnType);
        methodILGen.Emit(OpCodes.Ldtoken, lb.LocalType);
        methodILGen.Emit(OpCodes.Call, typeof(Type).GetMethod("GetTypeFromHandle"));
        methodILGen.Emit(OpCodes.Call, getMethod);
        methodILGen.Emit(OpCodes.Unbox_Any, lb.LocalType);
    }

    If the return type in the method declaration of T is either a value type or an enum, then we need to create an instance of the value type and return that instance to the caller. In order to accomplish that, we need to do the following:

    1) Get a handle to the Activator.CreateInstance method.
    2) Declare a local variable representing the Type of the return type (i.e. the type object of the return type) specified on the method declaration of T (obtained from the MethodInfo), and push this Type object onto the evaluation stack. In reality, a RuntimeTypeHandle is what is pushed onto the stack.
    3) Invoke the "GetTypeFromHandle" method (a static method on the Type class), passing in the RuntimeTypeHandle pushed onto the stack previously as an argument; the result of this invocation is a Type object (representing the method's return type), which is pushed onto the top of the evaluation stack.
    4) Invoke Activator.CreateInstance, passing in the Type object from step 3; the result of this invocation is an instance of the value type, boxed as a reference type and pushed onto the top of the evaluation stack.
    5) Unbox the result and place it into the local variable of the return type defined in step 2.

    methodILGen.Emit(OpCodes.Ldnull);

    If the return type is a reference type, then we just load a null onto the evaluation stack.

    methodILGen.Emit(OpCodes.Ret);

    Emit a return statement to return whatever is on top of the evaluation stack (null or an instance of a value type) back to the caller.

    Type constructedType = typeBuilder.CreateType();
    var instance = Activator.CreateInstance(constructedType);
    return (T)instance;

    Now that we have a definition of the "dynamic proxy" implementing all the methods declared on T, we can create an instance of the proxy type and return it, typed as T. The caller can now invoke the generator and request a dynamic proxy for any type T. In our example, when the client invokes GetNum() we get back "0". Let's add a new method called DayOfWeek GetDay() to the interface:

    public interface IFoo
    {
        int GetNum();
        DayOfWeek GetDay();
    }

    When GetDay() is invoked, the "dynamic proxy" returns "Sunday", since that is the default value for the DayOfWeek enum. This is a very trivial example of dynamic proxies; frameworks like MOQ have a way more sophisticated implementation of this paradigm, wherein you can instruct the framework to create proxies which return specified values for a method implementation.
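    To close the loop, here is a minimal usage sketch, assuming the IFoo interface and generator above are compiled into the same program:

    // Driver snippet: the emitted constructor prints "Creating Proxy instance",
    // and each proxied call prints "I'm a proxy" before returning the default
    // value of its return type.
    IFoo foo = DynamicProxyGenerator.GetInstanceFor<IFoo>();
    Console.WriteLine(foo.GetNum());  // prints 0
    Console.WriteLine(foo.GetDay());  // prints Sunday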

    Read the article

  • Exadata Parameter _AUTO_MANAGE_EXADATA_DISKS

    - by AVargas
    Exadata auto disk management is controlled by the parameter _AUTO_MANAGE_EXADATA_DISKS.
    The default value for this parameter is TRUE. When _AUTO_MANAGE_EXADATA_DISKS is enabled, Exadata automates the following disk operations:

    If a griddisk becomes unavailable/available, ASM will OFFLINE/ONLINE it.
    If a physicaldisk fails or its status changes to predictive failure, then for all griddisks built on it, ASM will DROP FORCE the failed ones and DROP the ones with predictive failures.
    If a flashdisk's performance degrades and there are griddisks built on it, they will be dropped with DROP FORCE in ASM.
    If a physicaldisk is replaced, the celldisk and griddisks will be recreated, and the griddisks will be automatically added in ASM if they were automatically dropped by ASM. If you manually dropped the disks, that will not happen.
    If a NORMAL, ONLINE griddisk is manually dropped, the FORCE option should not be used; otherwise the disk will be automatically added back in ASM.
    If a griddisk is inactivated, ASM will automatically OFFLINE it.
    If a griddisk is activated, ASM will automatically ONLINE it.

    Some error conditions may require temporarily disabling _AUTO_MANAGE_EXADATA_DISKS. Details are in MOS note 1408865.1 - Exadata Auto Disk Management Add disk failing and ASM Rebalance interrupted with error ORA-15074. Immediately after taking care of the problem, _AUTO_MANAGE_EXADATA_DISKS should be set back to its default value of TRUE. Full details on the auto disk management feature in Exadata are in MOS note 1484274.1.
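    For illustration, a hedged sketch of how that temporary toggle might look from SQL*Plus on the ASM instance. The x$ query is a common way to read hidden parameter values, but treat the whole snippet as an assumption about your environment, and change underscore parameters only under the guidance of Oracle Support, as the MOS note describes:

    -- Read the current value of the hidden parameter (run as SYSDBA/SYSASM)
    SELECT i.ksppinm AS name, v.ksppstvl AS value
      FROM x$ksppi i, x$ksppcv v
     WHERE i.indx = v.indx
       AND i.ksppinm = '_auto_manage_exadata_disks';

    -- Temporarily disable, then restore the default once the problem is fixed
    ALTER SYSTEM SET "_auto_manage_exadata_disks" = FALSE;
    ALTER SYSTEM SET "_auto_manage_exadata_disks" = TRUE;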

    Read the article

  • Select comma-separated result via comma-separated parameter

    - by Rodney Vinyard
    CREATE PROCEDURE [dbo].[GetCommaSepStringsByCommaSepNumericIds]
        (@CommaSepNumericIds varchar(max))
    AS
    BEGIN
        /* exec GetCommaSepStringsByCommaSepNumericIds '1xx1, 1xx2, 1xx3' */
        DECLARE @returnCommaSepIds varchar(max);

        ;WITH cte AS
        (
            SELECT DISTINCT
                   LEFT(qc.myString, 1) + '-' + SUBSTRING(qc.myString, 2, 9) + '-' + SUBSTRING(qc.myString, 11, 7) AS myString
            FROM q_CoaRequestCompound qc
            JOIN dbo.SplitStringToNumberTable(@CommaSepNumericIds) AS s
                 ON qc.q_CoaRequestId = s.ID
            WHERE SUBSTRING(UPPER(myString), 1, 1) IN ('L', '?')
        )
        SELECT @returnCommaSepIds = COALESCE(@returnCommaSepIds + ''',''', '''')
                                    + CAST(myString AS varchar(2x)) -- length anonymized in the original
        FROM cte;

        SET @returnCommaSepIds = @returnCommaSepIds + '''';

        SELECT @returnCommaSepIds;
    END
    GO

    CREATE FUNCTION [dbo].[SplitStringToNumberTable]
    (
        @commaSeparatedList varchar(max)
    )
    RETURNS @outTable TABLE (ID int)
    AS
    BEGIN
        DECLARE @parsedItem varchar(10), @Pos int;

        SET @commaSeparatedList = LTRIM(RTRIM(@commaSeparatedList)) + ',';
        SET @commaSeparatedList = REPLACE(@commaSeparatedList, ' ', '');
        SET @Pos = CHARINDEX(',', @commaSeparatedList, 1);

        IF REPLACE(@commaSeparatedList, ',', '') <> ''
        BEGIN
            WHILE @Pos > 0
            BEGIN
                SET @parsedItem = LTRIM(RTRIM(LEFT(@commaSeparatedList, @Pos - 1)));
                IF @parsedItem <> ''
                BEGIN
                    INSERT INTO @outTable (ID)
                    VALUES (CAST(@parsedItem AS int)); -- use an appropriate conversion
                END
                SET @commaSeparatedList = RIGHT(@commaSeparatedList, LEN(@commaSeparatedList) - @Pos);
                SET @Pos = CHARINDEX(',', @commaSeparatedList, 1);
            END
        END

        RETURN;
    END
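    A hedged usage sketch, reusing the anonymized placeholder IDs from the comment inside the procedure:

    -- Returns one quoted, comma-separated varchar,
    -- e.g. 'L-xxxxxxxxx-xxxxxxx','L-yyyyyyyyy-yyyyyyy'
    EXEC dbo.GetCommaSepStringsByCommaSepNumericIds '1xx1, 1xx2, 1xx3';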

    Read the article

  • kernel mem parameter

    - by Ashfame
    As a last resort for my question, I have yet to try the kernel's mem parameter to force it to use a specified amount of RAM. Short summary: I can only see 3.2GB of RAM on a 64-bit OS and am not sure if it's a hardware limitation, so I want to try this, as I found a post on Ubuntuforums suggesting it. My question is whether it's OK to experiment on my resident Ubuntu install, or should I be using a live bootable USB? What values do I try (I have 6GB with only 3.2GB usable), and how do I keep it safe? I don't want to burn out any hardware component or make the system unbootable. Running Ubuntu 11.10 with kernel 3.0.0-13-generic.
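    For what it's worth, a hedged sketch of how the mem parameter is usually tried. This assumes the GRUB 2 bootloader that Ubuntu 11.10 ships with, and 4G is an arbitrary example value. Editing the boot entry from the GRUB menu affects a single boot only, so it is a safe way to experiment without a live USB:

    # One-off test: at the GRUB menu press 'e', append mem=4G to the line
    # beginning with 'linux', then boot with Ctrl+X; nothing is written to disk.

    # To make a working value persistent, edit /etc/default/grub:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mem=4G"
    # and regenerate the configuration:
    sudo update-grub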

    Read the article

  • What's the difference between the input types "text" and "password" in an HTML form?

    - by Domingo
    Hi everybody, this question might seem stupid, but here's the situation: I'm trying to create an auto-login page for my mail using jQuery's post request, but it isn't working; it works with all other pages except webmail. So, trying to figure out what was wrong, I recreated the login form. Here's the code:

    <form id="form1" name="form1" method="post" action="https://login.hostmonster.com/">
      <label>User
        <input type="text" name="login" id="user" />
      </label>
      <label>Pass
        <input name="password" type="password" id="pass" />
      </label>
      <input name="doLogin" type="submit" id="doLogin" value="Login">
    </form>

    The strange thing is that when you change the input type of pass to text, the form doesn't work! I can't figure out why. Anyway, if you can tell me the real difference between the input types text and password (and not what it says everywhere on the net, that the only difference is that stars appear instead of characters as you type), I would appreciate it. Also, do you think this is affecting my jQuery post? Here's the code for it:

    $j.post('https://login.hostmonster.com/', {
        login: '[email protected]',
        password: 'xxx'
    }, function(data, text){
        if (text == 'success') {
            alert('Success ' + data);
        } else {
            alert('Failed');
        }
    });

    Thanks a lot! Regards, D

    Read the article

  • Change a File Type’s Icon in Windows 7

    - by Trevor Bekolay
    In Windows XP, you could change the icon associated with a file type in Windows Explorer. In Windows 7, you have to do some registry hacking to change a file type's icon. We'll show you a much easier and faster method for Windows 7.

    File Types Manager

    File Types Manager is a great little utility from NirSoft that includes the functionality of Windows XP's folder options and adds a whole lot more. It works great in Windows 7, and its interface makes it easy to change a bunch of related file types at once. A common problem we run into is icons that look too similar: you have to look for a few seconds to see the difference between the movies and the text files. Let's change the icon for the movie files to make visually scanning through directories much easier.

    Open up File Types Manager. Find the "Default Icon" column and click on it to sort the list by the Default Icon. (We've hidden a bunch of columns we don't need, so you may find it to be farther to the right.) This groups together all file extensions that already have the same icon, which is convenient because we want to change the icon of all video files, and at the moment they all have the same default icon.

    Click the "Find" button on the toolbar, or press Ctrl+F, and type in a file type that you want to change. Note that all of the extensions with the same default icon are grouped together. Right-click on the first extension whose icon you want to change and click on Edit Selected File Type, or select the first extension and press F2. Click the "..." button next to the Default Icon text field, then click on the Browse... button. File Types Manager allows you to select .exe, .dll, or .ico files. In our case, we have a .ico file that we took from the wonderful public domain Tango icon library. Select the appropriate icon (if you're using a .exe or .dll there could be many possible icons), then click OK. Repeat this process for each extension whose icon you would like to change.

    Now it's much easier to see at a glance which files are movies and which are text files! Of course, this process will work for any file type, so customize your files' icons as you see fit.

    Download File Types Manager from NirSoft for Windows
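    For reference, a hedged sketch of the registry route the article alludes to. The ProgID txtfile and the icon path are hypothetical examples, and this manual edit is exactly the work File Types Manager saves you:

    Windows Registry Editor Version 5.00

    ; Point the file type's DefaultIcon at a custom .ico file.
    ; Find the ProgID under HKEY_CLASSES_ROOT\.txt (often "txtfile").
    [HKEY_CLASSES_ROOT\txtfile\DefaultIcon]
    @="C:\\Icons\\text.ico,0"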

    Read the article

  • Illegal characters for SharePoint 2010 Content Type name

    - by Kelly Jones
    Quick tip: you can't include a backslash in the name of a SharePoint 2010 content type. In fact, there are several illegal characters: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.

    What, you didn't know that after entering one of these characters in the name? Is it because you saw a generic error screen instead? That's right: you need to turn off custom errors in the layouts folder (see this blog post for details), and you'll also need to turn them off for the web application. Once you do that, you'll see the real error. I wonder why the SharePoint team just doesn't tell the user that the content type name contains illegal characters before the user hits the Create button.

    Here's a copy of the complete error (for the search engines):

    Server Error in '/' Application.
    --------------------------------------------------------------------------------
    The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: Microsoft.SharePoint.SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:
    [SPInvalidContentTypeNameException: The content type name 'asdfadsf\asdfasf' cannot contain: \ / : * ? " # % < > { } | ~ & , two consecutive periods (..), or special characters such as a tab.]
       Microsoft.SharePoint.SPContentType.ValidateName(String name) +27419522
       Microsoft.SharePoint.SPContentType.ValidateNameWithResource(String strVal, String& strLocalized) +423
       Microsoft.SharePoint.SPContentType.set_Name(String value) +151
       Microsoft.SharePoint.SPContentType.Initialize(SPContentType parentContentType, SPContentTypeCollection collection, String name) +112
       Microsoft.SharePoint.SPContentType..ctor(SPContentType parentContentType, SPContentTypeCollection collection, String name) +132
       Microsoft.SharePoint.ApplicationPages.ContentTypeCreatePage.BtnOK_Click(Object sender, EventArgs e) +497
       System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
       System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
       System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981
    --------------------------------------------------------------------------------
    Version Information: Microsoft .NET Framework Version:2.0.50727.4927; ASP.NET Version:2.0.50727.4927
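    A hedged sketch of checking a proposed name up front, before the object model throws. The character list comes straight from the error message above; SPContentType performs this validation internally, so this standalone helper is an illustration, not the SharePoint API:

    // C# sketch: pre-validate a content type name. Class and method
    // names are hypothetical.
    using System;

    static class ContentTypeNames
    {
        private static readonly char[] Illegal =
            { '\\', '/', ':', '*', '?', '"', '#', '%', '<', '>', '{', '}', '|', '~', '&', ',', '\t' };

        public static bool IsValid(string name)
        {
            if (string.IsNullOrEmpty(name)) return false;
            if (name.Contains("..")) return false;       // two consecutive periods
            return name.IndexOfAny(Illegal) < 0;         // none of the illegal characters
        }
    }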

    Read the article

  • SPSiteDataQuery Returns Only One List Type At A Time

    - by Brian Jackett
    The SPSiteDataQuery class in SharePoint 2007 is very powerful, but it has a few limitations. One of these limitations, which I ran into this morning (and which caused hours of frustration), is that you can only return results from one list type at a time. For example, if you are trying to query items from an out-of-the-box custom list (list type = 100) and a document library (list type = 101), you will only get items from the custom list (SPSiteDataQuery defaults to list type = 100). In my situation I was attempting to query multiple lists (created from custom list templates 10001 and 10002), each with its own content types.

    Solution

    Since I am only able to return results from one list type at a time, I was forced to run my query twice, each time setting the ServerTemplate (which translates to ListTemplateId if you are defining custom list templates) before executing the query. Below is a snippet of the code to accomplish this.

    SPSiteDataQuery spDataQuery = new SPSiteDataQuery();
    spDataQuery.Lists = "<Lists ServerTemplate='10001' />";
    // ... set the rest of the properties for spDataQuery

    // AsEnumerable() on the returned DataTable needs a reference to
    // System.Data.DataSetExtensions
    var results = SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable();

    // the only change to spDataQuery is the ServerTemplate attribute
    // of the Lists property
    spDataQuery.Lists = "<Lists ServerTemplate='10002' />";

    // re-execute the query and concatenate the results to the existing set
    results = results.Concat(SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable());

    Conclusion

    Overall this isn't an elegant solution, but it's a workaround for a limitation of SPSiteDataQuery. I am now able to return data from multiple lists spread across various list templates. I'd like to thank those who commented on this MSDN page for finally pointing out the limitation to me. Also, thanks to Mark Rackley for "name dropping" me in his latest article (though I humbly insist I don't belong in such company), as well as for encouraging me to write up a quick post on this issue despite my busy schedule. Hopefully this post saves some of you from the frustrations I experienced this morning with SPSiteDataQuery. Until next time, Happy SharePoint'ing all.

    -Frog Out

    Links
    MSDN article for SPSiteDataQuery: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsitedataquery.lists.aspx

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Discovering XML Data Type Methods

    - by pinaldave
    This blog post is inspired by SQL Interoperability Joes 2 Pros: A Guide to Integrating SQL Server with XML, C#, and PowerShell – SQL Exam Prep Series 70-433 – Volume 5. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza]

    This is a follow-up to my earlier blog post on the same subject - SQL SERVER – Introduction to Discovering XML Data Type Methods – A Primer. In that article we discussed the basic terminology of XML. The article further covers the following important concepts:

    What are XML Data Type Methods
    The query() Method
    The value() Method
    The exist() Method
    The modify() Method

    These five are the most important concepts related to XML and SQL Server. There is much more to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1.) Which method returns an XML fragment from the source XML?
        1. query( )  2. value( )  3. exist( )  4. modify( )  5. All of them  6. Only query( ) and value( )

    2.) Which XML data type method returns a "1" if found and "0" if the specified XPath is not found in the source XML?
        1. query( )  2. value( )  3. exist( )  4. modify( )  5. All of them  6. Only query( ) and value( )

    3.) Which XML data type method allows you to pick the data type of the value that is returned from the source XML?
        1. query( )  2. value( )  3. exist( )  4. modify( )  5. All of them  6. Only query( ) and value( )

    4.) Which method will not work with a SQL SELECT statement?
        1. query( )  2. value( )  3. exist( )  4. modify( )  5. All of them  6. Only query( ) and value( )

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance.

    Solution

    1) 1
    2) 3
    3) 2
    4) 4

    Now check the answers above against your own. I am very confident you got them correct.

    Available at USA: Amazon India: Flipkart | IndiaPlaza Volume: 1, 2, 3, 4, 5

    Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
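    To make the quiz above concrete, here is a hedged sketch of all four methods against a made-up document; the XML shape is invented purely for illustration:

    DECLARE @x xml = N'<order id="1"><item qty="2">Widget</item></order>';

    -- query(): returns an XML fragment
    SELECT @x.query('/order/item');

    -- value(): returns a scalar of the data type you pick
    SELECT @x.value('(/order/item/@qty)[1]', 'int');

    -- exist(): returns 1 if the XPath is found, 0 if not
    SELECT @x.exist('/order[@id="1"]');

    -- modify(): changes the XML in place; note it is used with SET/UPDATE,
    -- not inside a SELECT statement (quiz question 4)
    SET @x.modify('replace value of (/order/item/@qty)[1] with "3"');
    SELECT @x;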

    Read the article

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. I have the following: in an SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a drop-down list), FinancialPeriod (a cascading drop-down list populated depending on the first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second one's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error: "An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type." I've tried setting the default of the second parameter to a meaningful value (something like 200901), as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain the available values for the second parameter.
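    One hedged sketch of the kind of default-value dataset that avoids the type error; the table and column names are hypothetical. While the cascade is still unresolved, a default dataset can return NULL, and a NULL default is one common way a float parameter ends up "not valid for its type", so guard the result:

    -- Hypothetical default-value dataset for @OpeningBalance; ISNULL keeps
    -- the float parameter from ever receiving NULL.
    SELECT ISNULL(SUM(b.Amount), 0.0) AS OpeningBalance
    FROM dbo.Balances AS b
    WHERE b.PeriodType = @FinancialPeriodType
      AND b.PeriodId = @FinancialPeriod;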

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >