Search Results

Search found 5301 results on 213 pages for 'unknown (yahoo)'.


  • .htaccess template, suggestions needed.

    - by purpler
    I compiled myself a .htaccess template and would like to know whether the caching and compression are set up right. Constructive suggestions and criticism needed.

      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      FileETag None
      Header unset ETag
      ServerSignature Off
      SetEnv TZ Europe/Belgrade

      # Rewrites
      Options +FollowSymLinks
      RewriteEngine On
      RewriteBase /

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^serpentineseo.com
      RewriteRule (.*) http://www.serpentineseo.com/$1 [R=301,L]

      # Redirect index to root
      RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*index\.html\ HTTP/
      RewriteRule ^(.*)index\.html$ /$1 [R=301,L]

      # Cache media files
      ExpiresActive On
      ExpiresDefault A0

      # Month
      <FilesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
      Header set Cache-Control "max-age=2592000, public"
      </FilesMatch>

      # Week
      <FilesMatch "\.(css|pdf)$">
      Header set Cache-Control "max-age=604800"
      </FilesMatch>

      # 10 Min
      <FilesMatch "\.(html|htm|txt)$">
      Header set Cache-Control "max-age=600"
      </FilesMatch>

      # Do not cache
      <FilesMatch "\.(pl|php|cgi|spl|scgi|fcgi)$">
      Header unset Cache-Control
      </FilesMatch>

      # Compress output
      <IfModule mod_deflate.c>
      <FilesMatch "\.(html|js|css)$">
      SetOutputFilter DEFLATE
      </FilesMatch>
      </IfModule>

      # Error Documents
      ErrorDocument 206 /error/206.html
      ErrorDocument 401 /error/401.html
      ErrorDocument 403 /error/403.html
      ErrorDocument 404 /error/404.html
      ErrorDocument 500 /error/500.html

      # Prevent hotlinking
      RewriteCond %{HTTP_REFERER} !^$
      RewriteCond %{HTTP_REFERER} !^http://(www\.)?serpentineseo.com/.*$ [NC]
      RewriteRule \.(gif|jpg|png)$ http://www.serpentineseo.com/images/angryman.png [R,L]

      # Prevent offline browsers
      RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:[email protected] [OR]
      RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
      RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
      RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
      RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
      RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
      RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
      RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
      RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]
      RewriteCond %{HTTP_USER_AGENT} ^FlashGet [OR]
      RewriteCond %{HTTP_USER_AGENT} ^GetRight [OR]
      RewriteCond %{HTTP_USER_AGENT} ^GetWeb! [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Go!Zilla [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Go-Ahead-Got-It [OR]
      RewriteCond %{HTTP_USER_AGENT} ^GrabNet [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Grafula [OR]
      RewriteCond %{HTTP_USER_AGENT} ^HMView [OR]
      RewriteCond %{HTTP_USER_AGENT} HTTrack [NC,OR]
      RewriteCond %{HTTP_USER_AGENT} ^Image\ Stripper [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Image\ Sucker [OR]
      RewriteCond %{HTTP_USER_AGENT} Indy\ Library [NC,OR]
      RewriteCond %{HTTP_USER_AGENT} ^InterGET [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Internet\ Ninja [OR]
      RewriteCond %{HTTP_USER_AGENT} ^JetCar [OR]
      RewriteCond %{HTTP_USER_AGENT} ^JOC\ Web\ Spider [OR]
      RewriteCond %{HTTP_USER_AGENT} ^larbin [OR]
      RewriteCond %{HTTP_USER_AGENT} ^LeechFTP [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Mass\ Downloader [OR]
      RewriteCond %{HTTP_USER_AGENT} ^MIDown\ tool [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Mister\ PiX [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Navroad [OR]
      RewriteCond %{HTTP_USER_AGENT} ^NearSite [OR]
      RewriteCond %{HTTP_USER_AGENT} ^NetAnts [OR]
      RewriteCond %{HTTP_USER_AGENT} ^NetSpider [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Net\ Vampire [OR]
      RewriteCond %{HTTP_USER_AGENT} ^NetZIP [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Octopus [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Offline\ Explorer [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Offline\ Navigator [OR]
      RewriteCond %{HTTP_USER_AGENT} ^PageGrabber [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Papa\ Foto [OR]
      RewriteCond %{HTTP_USER_AGENT} ^pavuk [OR]
      RewriteCond %{HTTP_USER_AGENT} ^pcBrowser [OR]
      RewriteCond %{HTTP_USER_AGENT} ^RealDownload [OR]
      RewriteCond %{HTTP_USER_AGENT} ^ReGet [OR]
      RewriteCond %{HTTP_USER_AGENT} ^SiteSnagger [OR]
      RewriteCond %{HTTP_USER_AGENT} ^SmartDownload [OR]
      RewriteCond %{HTTP_USER_AGENT} ^SuperBot [OR]
      RewriteCond %{HTTP_USER_AGENT} ^SuperHTTP [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Surfbot [OR]
      RewriteCond %{HTTP_USER_AGENT} ^tAkeOut [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Teleport\ Pro [OR]
      RewriteCond %{HTTP_USER_AGENT} ^VoidEYE [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Web\ Image\ Collector [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Web\ Sucker [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebAuto [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebCopier [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebFetch [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebGo\ IS [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebLeacher [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebReaper [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebSauger [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Website\ eXtractor [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Website\ Quester [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebStripper [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebWhacker [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WebZIP [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Wget [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Widow [OR]
      RewriteCond %{HTTP_USER_AGENT} ^WWWOFFLE [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Xaldon\ WebSpider [OR]
      RewriteCond %{HTTP_USER_AGENT} ^Zeus
      RewriteRule ^.*$ http://www.google.com [R,L]

      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000

      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|psd|log)$">
      Order Allow,Deny
      Deny from all
      </FilesMatch>
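    One robustness note on a template like this: the Header, ExpiresActive, and ExpiresDefault directives will cause a server error if mod_headers or mod_expires is not loaded, so a common defensive pattern is to wrap them in <IfModule> guards the same way the DEFLATE block already does. A minimal sketch of that pattern (the module names are stock Apache 2.x; adapt to your build):

      <IfModule mod_expires.c>
      ExpiresActive On
      ExpiresDefault A0
      </IfModule>
      <IfModule mod_headers.c>
      # One month, in seconds; tune to your release cadence
      <FilesMatch "\.(gif|jpg|jpeg|png|ico|swf|js)$">
      Header set Cache-Control "max-age=2592000, public"
      </FilesMatch>
      </IfModule>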

    Read the article

  • REST to Objects in C#

    RESTful interfaces for web services are all the rage for many Web 2.0 sites. If you want to consume these in a very simple fashion, LINQ to XML can do the job pretty easily in C#. If you go searching for help on this, you'll find a lot of incomplete solutions and fairly large toolkits and frameworks (guess how I know this); this quick article is meant to be a no-fluff, just-stuff approach to making this work.

    POCO Objects
    Let's assume you have a Model that you want to suck data into from a RESTful web service. Ideally this is a Plain Old CLR Object, meaning it isn't infected with any persistence or serialization goop. It might look something like this:

      public class Entry
      {
          public int Id;
          public int UserId;
          public DateTime Date;
          public float Hours;
          public string Notes;
          public bool Billable;

          public override string ToString()
          {
              return String.Format("[{0}] User: {1} Date: {2} Hours: {3} Notes: {4} Billable {5}",
                  Id, UserId, Date, Hours, Notes, Billable);
          }
      }

    Note that this is a completely trivial object. Let's look at the API for the service.

    RESTful HTTP Service
    In this case, it's TickSpot's API, with the following sample output:

      <?xml version="1.0" encoding="UTF-8"?>
      <entries type="array">
        <entry>
          <id type="integer">24</id>
          <task_id type="integer">14</task_id>
          <user_id type="integer">3</user_id>
          <date type="date">2008-03-08</date>
          <hours type="float">1.00</hours>
          <notes>Had trouble with tribbles.</notes>
          <billable>true</billable>  # Billable is an attribute inherited from the task
          <billed>true</billed>      # Billed is an attribute to track whether the entry has been invoiced
          <created_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</created_at>
          <updated_at type="datetime">Tue, 07 Oct 2008 14:46:16 -0400</updated_at>
          # The following attributes are derived and provided for informational purposes:
          <user_email>[email protected]</user_email>
          <task_name>Remove converter assembly</task_name>
          <sum_hours type="float">2.00</sum_hours>
          <budget type="float">10.00</budget>
          <project_name>Realign dilithium crystals</project_name>
          <client_name>Starfleet Command</client_name>
        </entry>
      </entries>

    I'm assuming in this case that I don't necessarily care about all of the data fields the service is returning; I just need some of them for my application's purposes. Thus, you can see there are more elements in the <entry> XML than I have in my Entry class.

    Get The XML with C#
    The next step is to get the XML. The following snippet does the heavy lifting once you pass it the appropriate URL:

      protected XElement GetResponse(string uri)
      {
          var request = WebRequest.Create(uri) as HttpWebRequest;
          request.UserAgent = ".NET Sample";
          request.KeepAlive = false;
          request.Timeout = 15 * 1000;

          var response = request.GetResponse() as HttpWebResponse;

          if (request.HaveResponse == true && response != null)
          {
              var reader = new StreamReader(response.GetResponseStream());
              return XElement.Parse(reader.ReadToEnd());
          }
          throw new Exception("Error fetching data.");
      }

    This is adapted from the Yahoo Developer article on Web Service REST calls. Once you have the XML, the last step is to get the data back as your POCO.

    Use LINQ-To-XML to Deserialize POCOs from XML
    This is done via the following code:

      public IEnumerable<Entry> List(DateTime startDate, DateTime endDate)
      {
          string additionalParameters = String.Format("start_date={0}&end_date={1}",
              startDate.ToShortDateString(), endDate.ToShortDateString());
          string uri = BuildUrl("entries", additionalParameters);

          XElement elements = GetResponse(uri);

          var entries = from e in elements.Elements()
                        where e.Name.LocalName == "entry"
                        select new Entry
                        {
                            Id = int.Parse(e.Element("id").Value),
                            UserId = int.Parse(e.Element("user_id").Value),
                            Date = DateTime.Parse(e.Element("date").Value),
                            Hours = float.Parse(e.Element("hours").Value),
                            Notes = e.Element("notes").Value,
                            Billable = bool.Parse(e.Element("billable").Value)
                        };
          return entries;
      }

    For completeness, here's the BuildUrl method for my TickSpot API wrapper:

      // Change these to your settings
      protected const string projectDomain = "DOMAIN.tickspot.com";
      private const string authParams = "[email protected]&password=MyTickSpotPassword";

      protected string BuildUrl(string apiMethod, string additionalParams)
      {
          if (projectDomain.Contains("DOMAIN"))
          {
              throw new ApplicationException("You must update your domain in ProjectRepository.cs.");
          }
          if (authParams.Contains("MyTickSpotPassword"))
          {
              throw new ApplicationException("You must update your email and password in ProjectRepository.cs.");
          }
          return string.Format("https://{0}/api/{1}?{2}&{3}", projectDomain, apiMethod, authParams, additionalParams);
      }

    That's it! Now go forth and consume XML and map it to classes you actually want to work with. Have fun!
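    One caveat on GetResponse as written: neither the HttpWebResponse nor the StreamReader is disposed, so under load the wrapper can exhaust pooled connections. A hedged variant of the same method (behavior unchanged, cleanup made explicit) might look like this:

      protected XElement GetResponse(string uri)
      {
          var request = WebRequest.Create(uri) as HttpWebRequest;
          request.UserAgent = ".NET Sample";
          request.KeepAlive = false;
          request.Timeout = 15 * 1000;

          // using-blocks guarantee the response and its stream are released
          using (var response = request.GetResponse() as HttpWebResponse)
          {
              if (response == null)
              {
                  throw new Exception("Error fetching data.");
              }
              using (var reader = new StreamReader(response.GetResponseStream()))
              {
                  return XElement.Parse(reader.ReadToEnd());
              }
          }
      }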

    Read the article

  • Graphics driver for ATI RV630 on 13.10

    - by Michael Cephalus
    I started updating the distro from 13.04 to 13.10. Then I got my hands on a Radeon HD 2600 and installed the RV630-compatible Catalyst driver from the official web page. After that, the X server crashed every time I opened a browser or, for example, VLC. I noticed that no driver was listed in the configuration below.

      michael@statubtunu:~$ lshw -c video
      WARNING: you should run this program as super-user.
        *-display UNCLAIMED
             description: VGA compatible controller
             product: RV630 PRO [Radeon HD 2600 PRO]
             vendor: Advanced Micro Devices, Inc. [AMD/ATI]
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 00
             width: 64 bits
             clock: 33MHz
             capabilities: vga_controller bus_master cap_list
             configuration: latency=0
             resources: memory:d0000000-dfffffff memory:e0500000-e050ffff ioport:1000(size=256) memory:e0000000-e001ffff

    I installed additional drivers from Jockey as well as the ATI driver from the Ubuntu Software Center, but that only made the X server crash completely. When I type:

      michael@statubtunu:~$ sudo startx

      X.Org X Server 1.14.3
      Release Date: 2013-09-12
      X Protocol Version 11, Revision 0
      Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu
      Current Operating System: Linux statubtunu 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 17:26:33 UTC 2013 i686
      Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.11.0-13-generic root=UUID=8fb2e395-0ea2-4f45-ac66-225696b7ce2c ro quiet splash vt.handoff=7
      Build Date: 15 October 2013 09:23:29AM
      xorg-server 2:1.14.3-3ubuntu2 (For technical support please see http://www.ubuntu.com/support)
      Current version of pixman: 0.30.2
      Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 12 18:50:02 2013
      (==) Using system config directory "/usr/share/X11/xorg.conf.d"
      Initializing built-in extension Generic Event Extension
      Initializing built-in extension SHAPE
      Initializing built-in extension MIT-SHM
      Initializing built-in extension XInputExtension
      Initializing built-in extension XTEST
      Initializing built-in extension BIG-REQUESTS
      Initializing built-in extension SYNC
      Initializing built-in extension XKEYBOARD
      Initializing built-in extension XC-MISC
      Initializing built-in extension SECURITY
      Initializing built-in extension XINERAMA
      Initializing built-in extension XFIXES
      Initializing built-in extension RENDER
      Initializing built-in extension RANDR
      Initializing built-in extension COMPOSITE
      Initializing built-in extension DAMAGE
      Initializing built-in extension MIT-SCREEN-SAVER
      Initializing built-in extension DOUBLE-BUFFER
      Initializing built-in extension RECORD
      Initializing built-in extension DPMS
      Initializing built-in extension X-Resource
      Initializing built-in extension XVideo
      Initializing built-in extension XVideo-MotionCompensation
      Initializing built-in extension SELinux
      Initializing built-in extension XFree86-VidModeExtension
      Initializing built-in extension XFree86-DGA
      Initializing built-in extension XFree86-DRI
      Initializing built-in extension DRI2
      Loading extension GLX
      ERROR: could not insert 'fglrx': No such device
      (II) [KMS] drm report modesetting isn't supported.
      (EE)
      (EE) Backtrace:
      (EE) 0: /usr/bin/X (xorg_backtrace+0x49) [0xb77780b9]
      (EE) 1: /usr/bin/X (0xb75d8000+0x1a3e24) [0xb777be24]
      (EE) 2: (vdso) (__kernel_rt_sigreturn+0x0) [0xb75b540c]
      (EE) 3: /usr/bin/X (xf86findOption+0x2a) [0xb7681daa]
      (EE) 4: /usr/bin/X (xf86findOptionValue+0x23) [0xb7681f43]
      (EE) 5: /usr/bin/X (0xb75d8000+0x7ebfd) [0xb7656bfd]
      (EE) 6: /usr/bin/X (xf86ProcessOptions+0x37) [0xb7657507]
      (EE) 7: /usr/lib/xorg/modules/libvbe.so (vbeDoEDID+0xe7) [0xb5eb8647]
      (EE) 8: /usr/lib/xorg/modules/drivers/vesa_drv.so (0xb5ee7000+0x287c) [0xb5ee987c]
      (EE) 9: /usr/bin/X (InitOutput+0xb23) [0xb7659c33]
      (EE) 10: /usr/bin/X (0xb75d8000+0x2a30b) [0xb760230b]
      (EE) 11: /lib/i386-linux-gnu/libc.so.6 (__libc_start_main+0xf5) [0xb71ba905]
      (EE) 12: /usr/bin/X (0xb75d8000+0x2a908) [0xb7602908]
      (EE)
      (EE) Segmentation fault at address 0x5
      (EE) Fatal server error:
      (EE) Caught signal 11 (Segmentation fault). Server aborting
      (EE)
      (EE) Please consult the The X.Org Foundation support at for help.
      (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
      (EE)
      (EE) Server terminated with error (1). Closing log file.

    This is what comes out, but no GUI. Is there any way to deal with this?
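    For what it's worth, the combination of "could not insert 'fglrx': No such device" and the UNCLAIMED display suggests the proprietary module never attached to the card, leaving X to die in a vesa/VBE fallback path. A common recovery on Ubuntu (a sketch of the usual approach, not verified on this exact machine) is to purge the Catalyst packages and fall back to the open-source radeon driver:

      sudo apt-get purge 'fglrx*'                                 # remove the Catalyst driver packages
      sudo rm -f /etc/X11/xorg.conf                               # drop any Catalyst-generated config
      sudo apt-get install --reinstall xserver-xorg-video-ati     # open-source radeon driver
      sudo reboot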

    Read the article

  • How to completely remove a package that didn't install properly?

    - by chtfn
    I recently installed a package from a PPA on Ubuntu 12.04 beta, but apparently it didn't work out, and now it gives me errors on every update or install I make, even after deactivating the PPA in my sources. This is what I get when I try uninstalling from the Ubuntu Software Center (the transcript below was partly in French; wget's localized output has been translated):

      installArchives() failed: (Reading database ... 295120 files and directories currently installed.)
      Removing oracle-java7-installer ...
      update-alternatives: error: unknown argument `cdrom'
      dpkg: error processing oracle-java7-installer (--remove):
       subprocess installed pre-removal script returned error exit status 2
      No apport report written because MaxReports is reached already
      Downloading...
      --2012-04-12 13:13:21-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
      Resolving download.oracle.com (download.oracle.com)... 203.13.161.233, 203.13.161.234
      Connecting to download.oracle.com (download.oracle.com)|203.13.161.233|:80... connected.
      HTTP request sent, awaiting response... 302 Moved Temporarily
      Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz [following]
      --2012-04-12 13:13:21-- https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
      Resolving edelivery.oracle.com (edelivery.oracle.com)... 173.223.150.174
      Connecting to edelivery.oracle.com (edelivery.oracle.com)|173.223.150.174|:443... connected.
      HTTP request sent, awaiting response... 302 Moved Temporarily
      Location: http://download.oracle.com/errors/download-fail-1505220.html [following]
      --2012-04-12 13:13:22-- http://download.oracle.com/errors/download-fail-1505220.html
      Connecting to download.oracle.com (download.oracle.com)|203.13.161.233|:80... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: 5307 (5.2K) [text/html]
      Saving to: ./jdk-7u3-linux-i586.tar.gz
      0K ..... 100% 4.94M=0.001s
      2012-04-12 13:13:22 (4.94 MB/s) - ./jdk-7u3-linux-i586.tar.gz saved [5307/5307]
      Download done.
      sha256sum mismatch jdk-7u3-linux-i586.tar.gz
      Oracle JDK 7 is NOT installed.
      dpkg: error while cleaning up:
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       oracle-java7-installer
      Error in function:

    I also tried "remove completely" from Synaptic, but it doesn't work either. Thank you for your help in advance!
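    Since the failure happens inside the package's pre-removal script (the `update-alternatives: error: unknown argument 'cdrom'` line), dpkg never gets far enough to remove it, regardless of which front end drives it. The usual workaround, sketched here under the assumption that only this package's maintainer scripts are broken, is to replace those scripts with no-ops and remove again:

      # Overwrite the broken maintainer scripts with empty shells (standard dpkg paths)
      sudo sh -c 'echo "#!/bin/sh" > /var/lib/dpkg/info/oracle-java7-installer.prerm'
      sudo sh -c 'echo "#!/bin/sh" > /var/lib/dpkg/info/oracle-java7-installer.postinst'
      # Retry the removal, then let apt repair anything left half-configured
      sudo dpkg --remove --force-remove-reinstreq oracle-java7-installer
      sudo apt-get -f install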

    Read the article

  • Cannot Mount USB 3.0 Hard Disk ?!!

    - by Tenken
    Hi, I have a USB 3.0 external hard disk which I am unable to mount. The entry appears in the "lsusb" output, but I do not exactly understand how to mount it. "ASMedia Technology Inc." is the USB 3.0 device. I would appreciate some help in mounting and accessing the hard disk. This is the relevant output of my "lsusb -v":

      Bus 009 Device 002: ID 174c:5106 ASMedia Technology Inc.
      Device Descriptor:
        bLength 18, bDescriptorType 1, bcdUSB 2.10
        bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0, bMaxPacketSize0 64
        idVendor 0x174c ASMedia Technology Inc., idProduct 0x5106, bcdDevice 0.01
        iManufacturer 2 ASMedia, iProduct 3 AS2105, iSerial 1 00000000000000000000
        bNumConfigurations 1
      Configuration Descriptor:
        bLength 9, bDescriptorType 2, wTotalLength 32, bNumInterfaces 1, bConfigurationValue 1
        iConfiguration 0, bmAttributes 0xc0 Self Powered, MaxPower 0mA
      Interface Descriptor:
        bLength 9, bDescriptorType 4, bInterfaceNumber 0, bAlternateSetting 0, bNumEndpoints 2
        bInterfaceClass 8 Mass Storage, bInterfaceSubClass 6 SCSI, bInterfaceProtocol 80 Bulk (Zip), iInterface 0
      Endpoint Descriptor (EP 1 IN):
        bLength 7, bDescriptorType 5, bEndpointAddress 0x81, bmAttributes 2 (Transfer Type Bulk, Synch Type None, Usage Type Data), wMaxPacketSize 0x0200 1x 512 bytes, bInterval 0
      Endpoint Descriptor (EP 2 OUT):
        bLength 7, bDescriptorType 5, bEndpointAddress 0x02, bmAttributes 2 (Transfer Type Bulk, Synch Type None, Usage Type Data), wMaxPacketSize 0x0200 1x 512 bytes, bInterval 0
      Device Qualifier (for other device speed):
        bLength 10, bDescriptorType 6, bcdUSB 2.10, bDeviceClass 0 (Defined at Interface level), bDeviceSubClass 0, bDeviceProtocol 0, bMaxPacketSize0 64, bNumConfigurations 1
      Device Status: 0x0001 Self Powered

      Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
      Device Descriptor:
        bLength 18, bDescriptorType 1, bcdUSB 3.00, bDeviceClass 9 Hub, bDeviceSubClass 0 Unused, bDeviceProtocol 3, bMaxPacketSize0 9
        idVendor 0x1d6b Linux Foundation, idProduct 0x0003 3.0 root hub, bcdDevice 2.06
        iManufacturer 3 Linux 2.6.35-28-generic xhci_hcd, iProduct 2 xHCI Host Controller, iSerial 1 0000:04:00.0
        bNumConfigurations 1
      Configuration Descriptor:
        bLength 9, bDescriptorType 2, wTotalLength 25, bNumInterfaces 1, bConfigurationValue 1
        iConfiguration 0, bmAttributes 0xe0 Self Powered Remote Wakeup, MaxPower 0mA
      Interface Descriptor:
        bLength 9, bDescriptorType 4, bInterfaceNumber 0, bAlternateSetting 0, bNumEndpoints 1
        bInterfaceClass 9 Hub, bInterfaceSubClass 0 Unused, bInterfaceProtocol 0 Full speed (or root) hub, iInterface 0
      Endpoint Descriptor (EP 1 IN):
        bLength 7, bDescriptorType 5, bEndpointAddress 0x81, bmAttributes 3 (Transfer Type Interrupt, Synch Type None, Usage Type Data), wMaxPacketSize 0x0004 1x 4 bytes, bInterval 12
      Hub Descriptor:
        bLength 9, bDescriptorType 41, nNbrPorts 4
        wHubCharacteristic 0x0009 (Per-port power switching, Per-port overcurrent protection), TT think time 8 FS bits
        bPwrOn2PwrGood 10 * 2 milliseconds, bHubContrCurrent 0 milliampere, DeviceRemovable 0x00, PortPwrCtrlMask 0xff
      Hub Port Status:
        Port 1: 0000.0100 power
        Port 2: 0000.0100 power
        Port 3: 0000.0503 highspeed power enable connect
        Port 4: 0000.0503 highspeed power enable connect
      Device Status: 0x0003 Self Powered Remote Wakeup Enabled

    This is the error given when I try to mount the hard drive:

      shinso@shinso-IdeaPad:~$ sudo mount /dev/sdb /mnt
      [sudo] password for shinso:
      mount: /dev/sdb: unknown device

    This is the output of "dmesg|tail":

      [30062.774178] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [30535.800977] usb 9-4: USB disconnect, address 3
      [30659.237342] Valid eCryptfs headers not found in file header region or xattr region
      [30659.237351] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [31259.268310] Valid eCryptfs headers not found in file header region or xattr region
      [31259.268313] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [31860.059058] Valid eCryptfs headers not found in file header region or xattr region
      [31860.059062] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO
      [32465.220590] Valid eCryptfs headers not found in file header region or xattr region
      [32465.220593] Either the lower file is not in a valid eCryptfs format, or the key could not be retrieved. Plaintext passthrough mode is not enabled; returning -EIO

    I am using Ubuntu 10.10 (64-bit). Any help is appreciated.
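    Two details stand out in the transcript: mount is being pointed at the whole disk (/dev/sdb) rather than a partition, and the log shows the device disconnecting ("usb 9-4: USB disconnect"). A first diagnostic pass might look like the following sketch (device names taken from the question; adjust them if the disk enumerates differently):

      # Check whether the kernel sees a partition table on the disk
      sudo fdisk -l /dev/sdb
      # If a partition such as /dev/sdb1 shows up, mount that rather than the raw disk
      sudo mkdir -p /media/usb3
      sudo mount /dev/sdb1 /media/usb3
      # Re-plug the drive and watch kernel events; repeated disconnects point at cabling or power
      dmesg | tail -n 20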

    Read the article

  • Failed to install GRUB on a separate '/boot' partition on a fake RAID 0 (12.04LTS)

    - by gerben
    I'm having some problems getting GRUB configured for Ubuntu 12.04 LTS on a fake RAID 0. At startup I get either the GRUB rescue prompt or a bare GRUB prompt, but I cannot boot into Ubuntu manually. How can I configure GRUB to actually use the Ubuntu install? The steps taken:

    Installing Ubuntu on fake RAID
    The Ubuntu installer cannot install Ubuntu on the drive. After defining the partitions to use, it fails with "Error: ???"; pressing OK terminates the installer. Therefore, I used GParted to configure the partitions on /dev/mapper/sil_agadaccfacbg (the RAID device):

      /dev/mapper/sil_agadaccfacbg1: ext2, 200 MiB (with 'boot' flag)
      /dev/mapper/sil_agadaccfacbg3: ext2, 67.75 GiB (which will contain Ubuntu)
      /dev/mapper/sil_agadaccfacbg2: extended, 1.00 GiB (for swap), containing /dev/mapper/sil_agadaccfacbg5: unknown

    Because of the fake RAID, I already mounted the destination partitions before running the Ubuntu installer:

      mkdir /mnt/boot
      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot
      mkdir /mnt/ubuntu
      sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu

    In the installer I chose the following partition usage:

      /dev/mapper/sil_agadaccfacbg1: ext2, mount at /boot (209 MB)
      /dev/mapper/sil_agadaccfacbg3: ext2, mount at / (72751 MB)
      /dev/mapper/sil_agadaccfacbg5: swap
      Device for boot loader installation: /dev/mapper/sil_agadaccfacbg, linux device-mapper (striped) (74.0 GB)

    This will install Ubuntu, but will fail to install GRUB (it seems to use /dev/sda no matter which device I choose).

    Installing GRUB with dpkg-reconfigure
    I followed this guide, but adapted it for two partitions:

      sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
      sudo mount --bind /dev /mnt/ubuntu/dev
      sudo mount --bind /proc /mnt/ubuntu/proc
      sudo mount --bind /sys /mnt/ubuntu/sys
      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/boot
      sudo mount --bind /boot /mnt/boot
      sudo chroot /mnt/ubuntu
      dpkg-reconfigure grub-pc

    However, it does not ask where to install GRUB (I should be able to choose /dev/mapper/sil_agadaccfacbg somewhere). After reboot I get the GRUB rescue prompt with the message "no such device".

    Installing GRUB with grub-install
    After the same mount commands as above, I continued with:

      sudo grub-install --root-directory=/mnt/boot /dev/mapper/sil_agadaccfacbg

    This gives the following message:

      /usr/sbin/grub-probe: error: cannot find a device for /mnt/boot/boot/grub (is /dev mounted?)

    It does succeed when mounting just the boot partition:

      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt
      sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg

    This finishes with "Installation finished. No error reported." After reboot I get the GRUB console with the welcome text. Attempting to manually start Ubuntu:

      ls
      (hd0)
      (hd0,msdos3)  (Ubuntu install partition)
      (hd0,msdos1)  (Ubuntu boot partition)
      (hd1)
      (hd1,msdos1)  (Ubuntu live USB)

      ls (hd0,msdos3)/ contains: vmlinuz, initrd.img, lib/, tmp/, mnt/, var/, proc/, boot/, root/, etc/, run/, media/, sbin/, bin/, selinux/, dev/, srv/, home/, sys/

      ls (hd0,msdos1)/ contains: grub/, boot/, initrd.img-3.8.0-29-generic, vmlinuz-3.8.0.29-generic, config-3.8

      linux (hd0,msdos3)/vmlinuz

    This returns "error: out of disk".

    Installing GRUB on the Ubuntu partition with grub-install

      sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt
      sudo grub-install --root-directory=/mnt/ /dev/mapper/sil_agadaccfacbg

    This finishes with the message "Installation finished. No error reported." After reboot I get the message "error: out of disk" and the GRUB rescue prompt.

    Configuring GRUB with grub-mkconfig
    Attempting to run grub-mkconfig with different destinations results in the same message:

      /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?).

    Remarks: Initially I didn't use a separate /boot partition, but the GRUB install failed then as well. Because some mention that a small partition at the beginning of the drive is necessary on old machines, I retried with a /boot partition. This is a single boot (no other OSes installed/used).
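    For comparison, the canonical chroot sequence mounts the boot partition inside the target root instead of bind-mounting the live system's /boot, which is one plausible reason grub-probe could not resolve devices here. A sketch using the device names from the question (untested on this particular array):

      sudo mount /dev/mapper/sil_agadaccfacbg3 /mnt/ubuntu
      sudo mount /dev/mapper/sil_agadaccfacbg1 /mnt/ubuntu/boot   # boot partition inside the target tree
      for d in dev proc sys; do sudo mount --bind /$d /mnt/ubuntu/$d; done
      sudo chroot /mnt/ubuntu grub-install /dev/mapper/sil_agadaccfacbg
      sudo chroot /mnt/ubuntu update-grub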

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble
    This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore?
    If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why they are a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations
    At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.

    The Win
    Compared to performance using classic indexing (which resulted in the expected query plan selection, including partition elimination), the SQL Server 2012 nonclustered columnstore increased query performance significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details
    The table provides the raw data & summarizes the performance deltas:

                                      Logical Reads (8K pages)   CPU (ms)    Durn (ms)
      Columnstore                     160,323                    20,360      9,786
      Conventional Table & Indexes    9,053,423                  549,608     193,903
      Delta                           x56                        x27         x20

    The charts provide additional perspective on this data.  "Conventional vs. Columnstore Metrics" documents the raw data; note on this linear display the magnitude of the conventional index performance vs. columnstore.  The "Metrics (Delta)" chart expresses these values as ratios.

    Summary
    For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Subsequent features in this series document performance enhancements that are even more significant.
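    To make the "no schema changes" point concrete: in SQL Server 2012 the whole optimization amounts to one DDL statement over the reporting columns of the fact table. A sketch follows (the table and column names are invented stand-ins, not the actual SONAR schema):

      -- Hypothetical fact table standing in for the partitioned SONAR warehouse table
      CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactPerf
      ON dbo.FactPerf (CollectionDate, ServerId, CounterId, SampleValue);

      -- Note: in SQL Server 2012 a nonclustered columnstore index makes the base table
      -- read-only while it exists; nightly loads work around this by loading a staging
      -- table and switching partitions in, matching the scenario described above.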

    Read the article

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites, and until recently I've used the Google DNS servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time: almost all of the major service providers are using anycast. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that:

      1. are not using anycast
      2. are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side)
      3. will respond to ICMP requests
      4. have an available service that runs on TCP/UDP (HTTP or DNS preferably)
      5. won't consider an automated request every 10 minutes to be abuse
      6. are accessible from anywhere in the world
      7. are not local to the ISP (consider this an investigation of a hostile party)

    Thanks in advance. Edit: added #6 and #7 above.

    More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these) and refuse to escalate the problem, even though two of these businesses have ISP-provided modems and all of us have completely different routers/services running. I am already quite familiar with the need to test an ISP-controlled IP, but they are actively dropping all packets targeted at gateway IP addresses and only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, to systems one hop away from the affected node, and to systems completely outside the network. Unfortunately, all of the systems I currently have are administered either directly by myself or by people who are biased enough to assist me. I need several systems included in the trace/log/graphs that are 100% not in the control of either myself or the ISP, so that the graphs have a stable/unbiased control group. These requirements come straight from legal; I'm just trying to make sure that everything that could be argued to invalidate the data is already covered.

    In summary: I need to be able to show TCP/UDP/ICMP as three separate data points, and I need to be able to show the connections inside the local node, from the local node to another nearby node, from those two nodes to the Internet, and through the Internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/OpenDNS/Yahoo/MSN/Facebook etc. all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (FCC or NSA, maybe?) set up for this type of testing. Thanks again.
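    Whichever control host is chosen, the collection side is easy to automate. Here is a sketch of a probe script that could run from cron every 10 minutes and log the three protocols separately (the target address below is a placeholder from the documentation range, and the DNS query assumes the host runs a resolver):

      #!/bin/sh
      # Hypothetical latency probe: one line each for ICMP, TCP (HTTP), and UDP (DNS)
      TARGET=192.0.2.10                        # placeholder; substitute the agreed control host
      STAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
      {
        printf '%s icmp ' "$STAMP"; ping -c 5 -q "$TARGET" | tail -n 1
        printf '%s tcp  ' "$STAMP"; curl -s -o /dev/null -w '%{time_total}s\n' "http://$TARGET/"
        printf '%s udp  ' "$STAMP"; dig @"$TARGET" example.com +tries=1 +time=5 | grep 'Query time'
      } >> /var/log/latency-probe.log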

    Read the article

  • X server with nvidia driver crashing: 12.04

    - by Raster
    My X server consistently crashes. It seems to happen when the X server is idle. This behaviour is new with 12.04, and it only happens on the second display of a multiseat system. Is there a configuration change I can make to stop this?

      X.Org X Server 1.11.3
      Release Date: 2011-12-16
      X Protocol Version 11, Revision 0
      Build Operating System: Linux 2.6.42-26-generic x86_64 Ubuntu
      Current Operating System: Linux Desktop 3.2.0-29-generic #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012 x86_64
      Kernel command line: BOOT_IMAGE=/vmlinuz-3.2.0-29-generic root=/dev/mapper/Group1-Root ro ramdisk_size=512000 quiet splash vt.handoff=7
      Build Date: 04 August 2012 01:51:23AM
      xorg-server 2:1.11.4-0ubuntu10.7 (For technical support please see http://www.ubuntu.com/support)
      Current version of pixman: 0.24.4
      Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.1.log", Time: Sun Sep 2 22:37:06 2012
      (==) Using config file: "/etc/X11/xorg.conf"
      (==) Using system config directory "/usr/share/X11/xorg.conf.d"
      Backtrace:
      0: /usr/bin/X (xorg_backtrace+0x26) [0x7fcee86be846]
      1: /usr/bin/X (0x7fcee8536000+0x18c6ea) [0x7fcee86c26ea]
      2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7fcee785c000+0xfcb0) [0x7fcee786bcb0]
      3: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x902a9) [0x7fcee154a2a9]
      4: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0xfd5e7) [0x7fcee15b75e7]
      5: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d6b92) [0x7fcee1990b92]
      6: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d74d5) [0x7fcee19914d5]
      7: /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so (0x7fcee14ba000+0x4d767d) [0x7fcee199167d]
      8: /usr/bin/X (0x7fcee8536000+0x1196ec) [0x7fcee864f6ec]
      9: /usr/bin/X (0x7fcee8536000+0xe8ad5) [0x7fcee861ead5]
      10: /usr/bin/X (0x7fcee8536000+0xe9d45) [0x7fcee861fd45]

    /etc/X11/xorg.conf:

      Section "ServerLayout"
          Identifier "Desktop"
          Screen 0 "DesktopScreen" 0 0
          InputDevice "DesktopMouse" "CorePointer"
          InputDevice "DesktopKeyboard" "CoreKeyboard"
          Option "AutoAddDevices" "false"
          Option "AllowEmptyInput" "true"
          Option "AutoEnableDevices" "false"
      EndSection

      Section "ServerLayout"
          Identifier "Desktop2"
          Screen 1 "Desktop2Screen" 0 0
          InputDevice "Desktop2Mouse" "CorePointer"
          InputDevice "Desktop2Keyboard" "CoreKeyboard"
          Option "AutoAddDevices" "false"
          Option "AllowEmptyInput" "true"
          Option "AutoEnableDevices" "false"
      EndSection

      Section "Module"
          Load "dbe"
          Load "extmod"
          Load "type1"
          Load "freetype"
          Load "glx"
      EndSection

      Section "Files"
      EndSection

      Section "ServerFlags"
          Option "AutoAddDevices" "false"
          Option "AutoEnableDevices" "false"
          Option "AllowMouseOpenFail" "on"
          Option "AllowEmptyInput" "on"
          Option "ZapWarning" "on"
          Option "HandleSepcialKeys" "off" # Zapping on
          Option "DRI2" "on"
          Option "Xinerama" "0"
      EndSection

      # Desktop Mouse
      Section "InputDevice"
          Identifier "DesktopMouse"
          Driver "evdev"
          Option "Device" "/dev/input/event3"
          Option "Protocol" "auto"
          Option "GrabDevice" "on"
          Option "Emulate3Buttons" "no"
          Option "Buttons" "5"
          Option "ZAxisMapping" "4 5"
          Option "SendCoreEvents" "true"
      EndSection

      # Desktop2 Mouse
      Section "InputDevice"
          Identifier "Desktop2Mouse"
          Driver "evdev"
          Option "Device" "/dev/input/event5"
          Option "Protocol" "auto"
          Option "GrabDevice" "on"
          Option "Emulate3Buttons" "no"
          Option "Buttons" "5"
          Option "ZAxisMapping" "4 5"
          Option "SendCoreEvents" "true"
      EndSection

      Section "InputDevice"
          Identifier "DesktopKeyboard"
          Driver "evdev"
          Option "Device" "/dev/input/event4"
          Option "XkbRules" "xorg"
          Option "XkbModel" "105"
          Option "XkbLayout" "us"
          Option "Protocol" "Standard"
          Option "GrabDevice" "on"
      EndSection

      Section "InputDevice"
          Identifier "Desktop2Keyboard"
          Driver "evdev"
          Option "Device" "/dev/input/event6"
          Option "XkbRules" "xorg"
          Option "XkbModel" "105"
          Option "XkbLayout" "us"
          Option "Protocol" "Standard"
          Option "GrabDevice" "on"
      EndSection

      Section "Monitor"
          Identifier "Desktop2Monitor"
          VendorName "Acer"
          ModelName "Acer G235H"
          HorizSync 30.0 - 83.0
          VertRefresh 56.0 - 75.0
          Option "DPMS"
      EndSection

      Section "Monitor"
          Identifier "DesktopMonitor"
          VendorName "Acer"
          ModelName "Acer H213H"
          HorizSync 30.0 - 83.0
          VertRefresh 56.0 - 75.0
          Option "DPMS"
      EndSection

      Section "Device"
          Identifier "EVGACard"
          Driver "nvidia"
          VendorName "NVIDIA Corporation"
          BoardName "GeForce GTX 560 Ti"
          Option "Coolbits" "1"
          BusID "PCI:2:0:0"
          Screen 0
      EndSection

      Section "Device"
          Identifier "XFXCard"
          Driver "nvidia"
          VendorName "NVIDIA Corporation"
          BoardName "GeForce GTX 9800"
          Option "Coolbits" "1"
          BusID "PCI:5:0:0"
          Screen 0
      EndSection

      Section "Screen"
          Identifier "DesktopScreen"
          Device "EVGACard"
          Monitor "DesktopMonitor"
          DefaultDepth 24
          SubSection "Display"
              Depth 24
          EndSubSection
      EndSection

      Section "Screen"
          Identifier "Desktop2Screen"
          Device "XFXCard"
          Monitor "Desktop2Monitor"
          DefaultDepth 24
          SubSection "Display"
              Depth 24
          EndSubSection
      EndSection

    Read the article

  • What is the best config for nginx worker_rlimit_nofile and worker_connections (28672)?

    - by Binh Nguyen
    I have an issue with very slow web-browser response (especially on IE): requests sometimes time out, and sometimes hang for up to 20 seconds on a single 301 redirect when tested with IE's F12 developer tools, which report a very long wait/start time. But once connected, the elements on the page download and display quickly (tested at xaluan.com). It happens mostly when there are more than 2100 active users on the site (per Google Analytics real-time). The server runs CentOS 5 with nginx and Apache on a 32-core CPU, 96 GB of RAM, and RAID-10 SAS disks.

    Following is my config:

      user nobody;
      # no need for more workers in the proxy mode
      worker_processes 28;  # old 32; good at 24
      error_log /var/log/nginx/error.log;  # old: "info" added at the end
      worker_rlimit_nofile 22528;

      events {
          worker_connections 22528;
          use epoll;  # you should use epoll here for Linux kernels 2.6.x
      }

      http {
          server_name_in_redirect off;
          server_names_hash_max_size 10240;
          server_names_hash_bucket_size 1024;
          include mime.types;
          default_type application/octet-stream;
          server_tokens off;
          disable_symlinks off;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          server_name_in_redirect off;
          server_names_hash_max_size 10240;
          server_names_hash_bucket_size 1024;
          include mime.types;
          default_type application/octet-stream;
          server_tokens off;
          disable_symlinks off;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 25;  # old 5
          gzip on;  # old on
          gzip_vary on;
          gzip_disable "MSIE [1-6]\.";
          gzip_proxied any;
          gzip_http_version 1.1;
          gzip_min_length 1000;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          ignore_invalid_headers on;
          client_header_timeout 1m;  # 3m
          client_body_timeout 1m;  # 3m
          send_timeout 1m;  # 3m
          reset_timedout_connection on;
          connection_pool_size 256;
          client_header_buffer_size 256k;
          large_client_header_buffers 4 256k;
          client_max_body_size 100M;
          client_body_buffer_size 256k;
          request_pool_size 32k;
          output_buffers 4 32k;
          postpone_output 1460;
          proxy_temp_path /tmp/nginx_proxy/;
          client_body_in_file_only on;
          log_format bytes_log "$msec $bytes_sent .";
          limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
          limit_conn limit_per_ip 20;
          limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
          limit_req zone=allips burst=200 nodelay;
          include "/etc/nginx/vhosts/*";
      }

    I have played around with the worker config:

      1. Tried increasing it as some suggest: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
      2. Tried setting it low: worker_processes = 28 and the other worker values at 22582, and other combinations too. But none of that worked, because sometimes it made the server load spike very quickly.
      3. Tried commenting out worker_rlimit_nofile, so it is unlimited. That seemed to help the response-time issue a bit, but it also made the server load climb quickly at peak time.

    Please help. Thanks!

    PS: Here is my Apache config, in case you spot something:

      Listen 0.0.0.0:8081
      User nobody
      Group nobody
      ExtendedStatus On
      ServerAdmin [email protected]
      ServerName server.xaluan.com
      LogLevel warn
      # These can be set in WHM under 'Apache Global Configuration'
      Timeout 100
      TraceEnable Off
      ServerSignature Off
      ServerTokens ProductOnly
      FileETag None
      StartServers 15
      <IfModule prefork.c>
          MinSpareServers 20
          MaxSpareServers 50
          # MaxSpareServers 40
      </IfModule>
      ServerLimit 1572
      MaxClients 1572
      MaxRequestsPerChild 4000
      # MaxRequestsPerChild 3000
      KeepAlive On
      KeepAliveTimeout 3
      MaxKeepAliveRequests 300
      # MaxKeepAliveRequests 130
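    For reference, the sizing rule most often quoted is that worker_rlimit_nofile should comfortably exceed worker_connections (a proxied connection can consume two descriptors), and the operating-system limit has to back both up. A sketch of that shape (the numbers are illustrative, not tuned for this host):

      # /etc/nginx/nginx.conf (fragment)
      worker_processes auto;        # one per core; 'auto' needs a newer nginx, else set the core count
      worker_rlimit_nofile 65536;   # per-worker FD cap; keep >= 2x worker_connections when proxying
      events {
          worker_connections 16384;
          use epoll;
      }

      # And raise the system-wide limit to match, e.g. in /etc/security/limits.conf:
      #   nobody  soft  nofile  65536
      #   nobody  hard  nofile  65536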

    Read the article

  • Managing Operational Risk of Financial Services Processes – part 2/2

    - by Sanjeev Sharma
    In my earlier blog post, I described the factors that lead to compliance complexity in financial services processes. In this post, I will outline the business implications of the increasing process compliance complexity and the specific role of BPM in addressing the operational-risk-reduction objectives of regulatory compliance.

    First, let's look at the business implications of increasing complexity of process compliance for financial institutions:

    · Increased time and cost of compliance, due to duplication of effort in conforming to regulatory requirements as processes change in response to evolving regulatory mandates, shifting business priorities, or internal/external audit requirements

    · Delays in audit reporting, due to quality issues in reconciling non-standard process KPIs and integrity concerns arising from the need to rely on multiple data sources for a given process

    Next, let's consider some approaches to managing the operational risk of business processes. Financial institutions considering reducing the operational risk of their processes, generally speaking, have two choices:

    · Rip and replace existing applications with new off-the-shelf applications.

    · Extend the capabilities of existing applications by modeling their data and process interactions (with other applications or user channels) outside of the application boundary, using BPM.

    The benefit of the first approach is that compliance with new regulatory requirements is embedded within the boundaries of these applications. However, the pre-built compliance of any packaged or custom-built application should not be mistaken for a one-shot fix for future compliance needs. The reason is that business needs and regulatory requirements inevitably outgrow the end-to-end capabilities of even the most comprehensive packaged or custom-built business application. Thus, processes that originally resided within the application will eventually spill outside the application boundary. It is precisely at such hand-offs between applications, or between overlaying processes, that vulnerabilities arise to unknown and accidental faults that potentially result in errors and lead to partial or total failure.

    The gist of the above argument is that processes which reside outside application boundaries (in other words, span multiple applications) constitute a latent operational risk that spans the end-to-end value chain. For instance, distortion of data flowing from an account-opening application to a credit-rating system, if left unchecked, renders compliance with "KYC" policies void, even when the "KYC" checklist was enforced at the time of data capture by the account-opening application. Oracle Business Process Management is enabling financial institutions to lower the operational risk of such process "gaps" for financial services processes including "Customer On-boarding", "Quote-to-Contract", "Deposit/Loan Origination", "Trade Exceptions", "Interest Claim Tracking", etc.

    If you are faced with a similar challenge and need any guidance, feel free to drop me a note.

    Read the article

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16 GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files. However, only a few files were copied before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I DON'T REALLY NEED ANY FILES THAT I COPIED TO THE PENDRIVE. I JUST WANT TO USE THE PENDRIVE AGAIN.

    I have tried using Ubuntu to format the pen drive, but when I use fdisk to delete the partition, it looks like it works, yet in fact it does not delete the partition. I am also unable to format it with any other file system. When I tried to use GParted, it throws the following error:

      Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0, 0).
      The file system wasn't safely closed on Windows. Fixing.
      ntfs_attr_pread_i: ntfs_pread failed: Input/output error
      Failed to read NTFS $Bitmap: Input/output error
      NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware.
      In the first case run chkdsk /f on Windows then reboot into Windows twice.
      The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then
      first activate it and mount a different device under the /dev/mapper directory,
      (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover. But I don't want to recover; I want to format it again. When I pressed w after deleting the partition, it took more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

      root@yesuraj-ubuntu:~# fdisk /dev/sdb
      Command (m for help): d
      Selected partition 1
      Command (m for help): w
      The partition table has been altered!
      Calling ioctl() to re-read partition table.
      Syncing disks.

    The dmesg prints are as follows:

      [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
      [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
      [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
      [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
      [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
      [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
      [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
      [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table, but now when I connect the pen drive, dmesg contains this error message:

      [88143.437001] sdb: unknown partition table

    I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to open the node. Other messages from dmesg follow below:

      [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
      [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
      [87135.932647] usb 2-1.3: device descriptor read/8, error -110
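    Setting aside the repeated "device descriptor read ... error -110" lines in dmesg (which often indicate a failing stick, cable, or port rather than a software problem), the usual way to force a clean slate once dd has wiped the partition table is to write a fresh label and filesystem. A sketch with parted, using the device name from the question (double-check with sudo fdisk -l first, since this destroys everything on the disk):

      sudo parted /dev/sdb mklabel msdos                               # write a brand-new empty partition table
      sudo parted -a optimal /dev/sdb mkpart primary fat32 1MiB 100%   # one partition spanning the drive
      sudo partprobe /dev/sdb                                          # ask the kernel to re-read the table
      sudo mkfs.vfat -n PENDRIVE /dev/sdb1                             # FAT32; use mkfs.ntfs for NTFS instead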

    Read the article

  • Delight and Excite

    - by Applications User Experience
    Mick McGee, CEO & President, EchoUser Editor’s Note: EchoUser is a User Experience design firm in San Francisco and a member of the Oracle Usability Advisory Board. Mick and his staff regularly consult on Oracle Applications UX projects. Being part of a user experience design firm, we have the luxury of working with a lot of great people across many great companies. We get to help people solve their problems.  At least we used to. The basic design challenge is still the same; however, the goal is not necessarily to solve “problems” anymore; it is, “I want our products to delight and excite!” The question for us as UX professionals is how to design to those goals, and then how to assess them from a usability perspective. I’m not sure where I first heard “delight and excite” (A book? blog post? Facebook  status? Steve Jobs quote?), but now I hear these listed as user experience goals all the time. In particular, somewhat paradoxically, I routinely hear them in enterprise software conversations. And when asking these same enterprise companies what will make the project successful, we very often hear, “Make it like Apple.” In past days, it was “make it like Yahoo (or Amazon or Google“) but now Apple is the common benchmark. Steve Jobs and Apple were not secrets, but with Jobs’ passing and Apple becoming the world’s most valuable company in the last year, the impact of great design and experience is suddenly very widespread. In particular, users’ expectations have gone way up. Being an enterprise company is no shield to the general expectations that users now have, for all products. Designing a “Minimum Viable Product” The user experience challenge has historically been, to echo the words of Eric Ries (author of Lean Startup) , to create a “minimum viable product”: the proverbial, “make it good enough”. But, in our profession, the “minimum viable” part of that phrase has oftentimes, unfortunately, referred to the design and user experience. Technology typically dominated the focus of the biggest, most successful companies. Few have had the laser focus of Apple to also create and sell design and user experience alongside great technology. But now that Apple is the most valuable company in the world, copying their success is a common undertaking. Great design is now a premium offering that everyone wants, from the one-person startup to the largest companies, consumer and enterprise. This emerging business paradigm will have significant impact across the user experience design process and profession. One area that particularly interests me is, how are we going to evaluate these new emerging “delight and excite” experiences, which are further customized to each particular domain? How to Measure “Delight and Excite” Traditional usability measures of task completion rate, assists, time, and errors are still extremely useful in many situations; however, they are too blunt to offer much insight into emerging experiences “Satisfaction” is usually assessed in user testing, in roughly equivalent importance to the above objective metrics. Various surveys and scales have provided ways to measure satisfying UX, with whatever questions they include. However, to meet the demands of new business goals and keep users at the center of design and development processes, we have to explore new methods to better capture custom-experience goals and emotion-driven user responses. 
We have had success assessing custom experiences, including “delight and excite”, by employing a variety of user testing methods that tend to combine formative and summative techniques (formative being focused more on identifying usability issues and ways to improve design, and summative focused more on metrics). Our most successful tool has been one we’ve been using for a long time, Magnitude Estimation Technique (MET). It’s not necessarily about MET as a measure, though, but rather about how it is created. Caption: For one client, EchoUser did two rounds of testing. Each test was a mix of performing representative tasks and gathering qualitative impressions. Each user participated in an in-person moderated 1-on-1 session for 1 hour, using a testing set-up where they held the phone. The primary goal was to identify usability issues and recommend design improvements. MET is based on a definition of the desired experience, which users will then use to rate items of interest (usually tasks in a usability test). In other words, a custom experience definition needs to be created. This can then be used to measure satisfaction in accomplishing tasks, “delight and excite”, or anything else from strategic goals, user demands, or elsewhere. For reference, our standard MET definition in usability testing is: “User experience is your perception of how easy to use, well designed and productive an interface is to complete tasks.” Articulating the User Experience We’ve helped construct experience definitions for several clients to better match their business goals. One example is a modification of the above that was needed for a company that makes medical-related products: “User experience is your perception of how easy to use, well-designed, productive and safe an interface is for conducting tasks. ‘Safe’ is how free an environment (including devices, software, facilities, people, etc.) is from danger, risk, and injury.” Another example is from a company that is pushing hard to incorporate “delight” into their enterprise business line: “User experience is your perception of a product’s ease of use and learning, satisfaction and delight in design, and ability to accomplish objectives.” I find the last one particularly compelling in that there is little that identifies the experience as being for a highly technical enterprise application. That definition could easily be applied to any number of consumer products. We have gone further than the above, including “sexy” and “cool” where decision-makers insisted they were part of the desired experience. We also applied it to completely different experiences where the “interface” was, for example, riding public transit, the “tasks” were train rides, and we followed the participants through the train-riding journey and rated various aspects accordingly: “A good public transportation experience is a cost-effective way of reliably, conveniently, and safely getting me to my intended destination on time.” To construct these definitions, we’ve employed both bottom-up and top-down approaches, depending on circumstances. For bottom-up, user inputs help dictate the terms that best fit the desired experience (usually by way of cluster and factor analysis). Top-down depends on strategic, visionary goals expressed by upper management that we then attempt to integrate into product development (e.g., “delight and excite”). We like a combination of both approaches to push the innovation envelope, but still be mindful of current user concerns. 
Hopefully the idea of crafting your own custom experience, and a way to measure it, can provide you with some ideas about how to adapt your user experience needs to whatever company you are in. Whether product-development or service-oriented, nearly every company is ultimately providing a user experience. The Bottom Line Creating great experiences may have been popularized by Steve Jobs and Apple, but I’ll be honest, it’s a good feeling to be moving from “good enough” to “delight and excite,” despite the challenge that entails. In fact, it’s because of that challenge that we will expand what we do as UX professionals to help deliver and assess those experiences. I’m excited to see how we, Oracle, and the rest of the industry will live up to that challenge.

    Read the article

  • Moving abroad - Relocation advice

    - by Tim Koekkoek
    Oracle offers graduates from different European countries the opportunity to start their career abroad. Some already have experience with living abroad, as they have done an exchange semester or internship in another country; for others it is the first time they will move abroad. Rui started in October 2011 as a Business Development Consultant in Dublin and moved from Portugal to Dublin, Ireland to start his career. For those planning to leave their home country and who want to work abroad, he shares some tips and tricks in this article. When you’re faced with an opportunity like this, there are lots of things that will come to your mind. Sometimes it can be very exciting, or even stressful. 1. First of all, try to relax. If you are certain you are moving abroad, all you need to do is some research about the country where you’re going to live: get to know its culture (gastronomy, important dates and events), its economy, and effective ways to keep in touch with your family and friends (such as mobile companies and Internet services), and start to work out which locations (with good access) would be the best places for you to live. Don’t forget that initially you may be limited by transport, so it is important to explore the ideal places for you. During this time, Oracle provides everything you’ll need (papers, documents, etc.) to cross borders. 2. When you arrive, you realize that you are in a new country, in a new place, where all things (or most) are unknown to you. Before you panic, try to see it as a new challenge where new opportunities will come. It’s not always easy, I know, but the very best a new place has to offer is the opportunity to understand a new culture, get to know other people and other ways of working, and grow both personally and professionally. So you have nothing to lose in this kind of experience. 3. When you arrive at Oracle, there’s a fantastic team that will help you with settling in: HR, Payroll, Relocation, IT. In my case, Oracle helped me with the relocation; they supported me in arranging everything, such as helping out with all the paperwork and finding a new apartment. As you can see, they will do their best to help you be successful! 4. Engage with your new co-workers. Going to a place where you don’t know anyone can be tough sometimes, but see it as an opportunity to meet people from all over the world and share experiences. Embrace it. 5. Plan ahead: try to get as much information as possible and use it. Oracle is a multinational enterprise that will allow you to get to know a new labour market, and it gives you the flexibility to join different teams and working areas, so that you can work where you fit best. Good luck! If you’re thinking about starting a career abroad, read the following article: http://www.overseasdigest.com/movingtips.htm; it can be very useful to you. Interested in starting your career at Oracle like Rui has? Please have a look at https://campus.oracle.com for all of our latest vacancies.

    Read the article

  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain at that server, but when I ping the domain it returns error 1214. Here is my named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "mydomain.com"{ type master; file "/var/named/data/named.mydomain.com"; allow-update { none; }; }; And the content of "/var/named/data/named.mydomain.com": $TTL 38400 mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. ( 2012101201 ; serial number YYMMDDNN 28800 ; Refresh 7200 ; Retry 864000 ; Expire 38400 ; Min TTL ) mydomain.com. IN A 1.2.3.4 www IN A 1.2.3.4 ns1.mydomain.com. IN A 1.2.3.4 ns2.mydomain.com. IN A 1.2.3.4 mydomain.com. IN NS ns1.mydomain.com. mydomain.com. IN NS ns2.mydomain.com. And I am sure the named service is running: [root@server ~]# service named status version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 CPUs found: 8 worker threads: 8 number of zones: 20 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running named (pid 26299) is running...
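    As a first diagnostic step (a generic BIND checklist, not taken from the original post), the configuration and the zone file can be validated with the checker tools that ship with BIND, and the server can be queried directly to rule out network or firewall problems:
      # validate named.conf and the zone file syntax
      named-checkconf /etc/named.conf
      named-checkzone mydomain.com /var/named/data/named.mydomain.com
      # query the server directly, bypassing any upstream resolver
      dig @127.0.0.1 mydomain.com A
      # confirm port 53 (TCP and UDP) is open in the firewall
      iptables -L -n | grep 53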

    Read the article

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We’ve all written a CASE expression (yes, it’s an expression and not a statement) or two every now and then. But did you know there are actually 2 formats you can write the CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure. In some rare cases the stored procedure just didn’t work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs. In the first format we make simple “equals to” comparisons to a value: SELECT CASE <value> WHEN <equals this value> THEN <return this> WHEN <equals this value> THEN <return this> -- ... more WHEN's here ELSE <return this> END The second format is much more flexible since it allows for complex conditions. USE THIS ONE! SELECT CASE WHEN <value> <compared to> <value> THEN <return this> WHEN <value> <compared to> <value> THEN <return this> -- ... more WHEN's here ELSE <return this> END Now that we know both formats, and you know which to use (the second one, if that hasn’t been clear enough), here’s an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i. Just comment out any 2 out of 3 “SELECT @i =” statements. DECLARE @i INT SELECT @i = -1 -- first result SELECT @i = 55 -- second result SELECT @i = NULL -- third result SELECT @i AS OriginalValue, -- first CASE format. DON'T USE THIS! CASE @i WHEN -1 THEN '-1' WHEN NULL THEN 'We have a NULL!' ELSE 'We landed in ELSE' END AS DontUseThisCaseFormatValue, -- second CASE format. USE THIS! CASE WHEN @i = -1 THEN '-1' WHEN @i IS NULL THEN 'We have a NULL!' ELSE 'We landed in ELSE' END AS UseThisCaseFormatValue When the value of @i is –1 everything works as expected, since both formats go into the –1 WHEN branch. When the value of @i is 55 everything again works as expected, since both formats go into the ELSE branch. When the value of @i is NULL the problems become evident. The first format doesn’t go into the WHEN NULL branch because it makes an equality comparison between two NULLs. Because NULL is an unknown value, NULL = NULL does not evaluate to true. That is why the first format goes into the ELSE branch, while the second format correctly handles it with a proper IS NULL comparison. Please use the second, more explicit format. Your future self will be very grateful to you when he doesn’t have to discover these kinds of bugs.
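    A one-line way to convince yourself of the NULL comparison rule (a minimal standalone sketch, independent of the example above):
      -- With the default ANSI_NULLS ON setting, NULL = NULL is not true,
      -- so the first column returns 'not equal'; only IS NULL can test for NULL.
      SELECT CASE WHEN NULL = NULL THEN 'equal' ELSE 'not equal' END AS EqualsCheck,
             CASE WHEN NULL IS NULL THEN 'is null' ELSE 'not null' END AS IsNullCheck;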

    Read the article

  • 12.10 unable to install or even run from Live CD with nVidia GTX 580

    - by user99056
    I've used Ubuntu in the past (set up as web server, etc over in Iraq), so I'm not a 100% Linux Noob, however, I'm running into a brick wall here. I've got a machine I built when I got back to the US earlier this year, running Windows 7 Ultimate on it, and I've now got some free time and would like to transition over to Ubuntu full time. I've searched around in the forums, and there seems to be an issue with the nVidia graphics cards, so I've tried going to the EVGA site to see if I could find a new BIOS update for it and had no luck, so I'm back searching the forums here again and decided to just go ahead and post my question. My apologies if this is covered in another post and I was just unable to find it. I've found a few 'similar' posts, but nothing as bad as my issue. With the history aside, here is the actual detailed issue: I purchased a new SSD (Intel 520 SSD), arrived today, and I disconnect my old Windows 7 SSD. I had pre downloaded the ubuntu-12.10-desktop-amd64 earlier today and burned it to DVD. Upon inserting the Live CD into the computer and booting up, everything was fine up to the 'Run From Live CD' or 'Install Ubuntu Now' buttons. As I was sure I wanted to go ahead and make the switch, I selected the 'Install Now' from the right hand side. CD Spins up, black window pops up, and then the errors started: date/time GPU Lockup date/time Failed to idle channel 1 date/time PFIFO - playlist update failed date/time Failed to idle channel 2 date/time PFIFO - playlist update failed Thinking it might correct itself, I let it run and it would swap over to a GUI Screen that was locked up with major blurring/etc, then back to the command line with the errors. Eventually it said something along the lines of 'unknown status' and switched back to the GUI and froze. So, that's when I tried to see if I could find a BIOS upgrade for the nVidia GTX580 cards, and had no luck. So I thought, why not try to just run it from the Live CD and see if I can at least get a look at it, maybe if I could get it running try to do some sort of install from there and fix the driver issue. I rebooted, brought up the Live CD, and this time chose the left option / run from the CD. It brought me all the way in to the desktop, I saw my drives, the other icons, could move the mouse, etc for about 30 seconds and then it locked up completely. I've tried this a couple of times and get the same results every time. Hardware: Intel i7-3930K CPU @ 3.2GHz (12 CPUs) / MSI MS-7760 Motherboard / 32GB RAM / 2 x EVGA (nVidia) GeForce GTX 580 (4GB Ram each) So the question is: Is there any way to install 12.10 if you can't even get the Live CD to run (for more than 30 seconds)? My current hardware configuration is both of the GTX 580 cards have an SLI jumper on them, and I have 2 monitors on each card. (Ubuntu info obviously only shows on the main monitor from the failed installation and the attempt at running the Live CD). Perhaps opening the machine back up and removing the SLI Jumper and removing the other 3 monitors (so it only would have 1 video card with one monitor on it) would actually allow me to get 12.10 installed, then I could work on an nVidia Video Driver fix for the GTX 580, and then possibly hook up the other video card and monitors? Or is this something that they are currently aware of and may update with a future release in the next few days/weeks? 
Any thoughts or suggestions would be greatly appreciated, as I can't even try to fix the issue (assuming it is the nVidia drivers) if I can't even get it to install at all.
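    A workaround commonly suggested for exactly this symptom (an assumption on my part, not something from the original post) is to boot the live session with the nomodeset kernel parameter, so that the open-source nouveau driver does not initialize the GTX 580s, and then install the proprietary nVidia driver once the system is on disk:
      # at the live CD boot menu, press F6 (or edit the boot line) and append:
      nomodeset
      # after installing and booting the installed system with nomodeset,
      # install the proprietary driver, for example:
      sudo apt-get install nvidia-current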

    Read the article

  • Using LDAP with the ZFS Storage Appliance

    - by user13138569
    The ZFS Storage Appliance can use OpenLDAP as its directory service, so let's try registering an LDAP user. I set up OpenLDAP on Solaris 11; below are the slapd.conf and the LDIF used to create a test user, user01. First, slapd.conf: # # See slapd.conf(5) for details on configuration options. # This file should NOT be world readable. # include /etc/openldap/schema/core.schema include /etc/openldap/schema/cosine.schema include /etc/openldap/schema/nis.schema # Define global ACLs to disable default read access. # Do not enable referrals until AFTER you have a working directory # service AND an understanding of referrals. #referral ldap://root.openldap.org pidfile /var/openldap/run/slapd.pid argsfile /var/openldap/run/slapd.args # Load dynamic backend modules: modulepath /usr/lib/openldap moduleload back_bdb.la # moduleload back_hdb.la # moduleload back_ldap.la # Sample security restrictions # Require integrity protection (prevent hijacking) # Require 112-bit (3DES or better) encryption for updates # Require 63-bit encryption for simple bind # security ssf=1 update_ssf=112 simple_bind=64 # Sample access control policy: # Root DSE: allow anyone to read it # Subschema (sub)entry DSE: allow anyone to read it # Other DSEs: # Allow self write access # Allow authenticated users read access # Allow anonymous users to authenticate # Directives needed to implement policy: # access to dn.base="" by * read # access to dn.base="cn=Subschema" by * read # access to * # by self write # by users read # by anonymous auth # # if no access controls are present, the default policy # allows anyone and everyone to read anything but restricts # updates to rootdn. (e.g., "access to * by * read") # # rootdn can always read and write EVERYTHING! ####################################################################### # BDB database definitions ####################################################################### database bdb suffix "dc=oracle,dc=com" rootdn "cn=Manager,dc=oracle,dc=com" # Cleartext passwords, especially for the rootdn, should # be avoid. See slappasswd(8) and slapd.conf(5) for details. # Use of strong authentication encouraged. rootpw secret # The database directory MUST exist prior to running slapd AND # should only be accessible by the slapd and slap tools. # Mode 700 recommended. directory /var/openldap/openldap-data # Indices to maintain index objectClass eq Next, the LDIF that was loaded: dn: dc=oracle,dc=com objectClass: dcObject objectClass: organization dc: oracle o: oracle dn: cn=Manager,dc=oracle,dc=com objectClass: organizationalRole cn: Manager dn: ou=People,dc=oracle,dc=com objectClass: organizationalUnit ou: People dn: ou=Group,dc=oracle,dc=com objectClass: organizationalUnit ou: Group dn: uid=user01,ou=People,dc=oracle,dc=com uid: user01 objectClass: top objectClass: account objectClass: posixAccount objectClass: shadowAccount cn: user01 uidNumber: 10001 gidNumber: 10000 homeDirectory: /home/user01 userPassword: secret loginShell: /bin/bash shadowLastChange: 10000 shadowMin: 0 shadowMax: 99999 shadowWarning: 14 shadowInactive: 99999 shadowExpire: -1 With the LDAP server ready, configure the appliance under Configuration > SERVICES > LDAP and set the Base search DN to match the directory's suffix. Then try registering the LDAP user user01 on the appliance. If adding the user fails with "Unknown or invalid user", the appliance could not look the account up in the directory. To confirm that the directory itself answers correctly, I configured a Solaris 11 machine as an LDAP client and checked that the user resolves with getent: 
# svcadm enable svc:/network/nis/domain:default # svcadm enable ldap/client # ldapclient manual -a authenticationMethod=none -a defaultSearchBase=dc=oracle,dc=com -a defaultServerList=192.168.56.201 System successfully configured # getent passwd user01 user01:x:10001:10000::/home/user01:/bin/bash Next, I mounted a share from the appliance over NFS and confirmed that files created by user01 get the right ownership: # mount -F nfs -o vers=3 192.168.56.101:/export/user01 /mnt # su user01 bash-4.1$ cd /mnt bash-4.1$ touch aaa bash-4.1$ ls -l total 1 -rw-r--r-- 1 user01 10000 0 May 31 04:32 aaa With that, LDAP users can be used with the ZFS Storage Appliance!
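    For completeness, a sketch of how the LDIF above could be loaded into the running slapd using the standard OpenLDAP client tools (the file name init.ldif is my own placeholder):
      # load the entries using the rootdn from slapd.conf
      ldapadd -x -H ldap://localhost -D "cn=Manager,dc=oracle,dc=com" -w secret -f init.ldif
      # verify the user entry is searchable
      ldapsearch -x -H ldap://localhost -b "dc=oracle,dc=com" "(uid=user01)"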

    Read the article

  • Understanding Adaptive Execution Plans in Oracle Database 12c

    - by Liu Maclean
    12c R1 introduces a new SQL optimizer feature, Adaptive Execution Plans. The idea is that the optimizer can defer part of its plan decision to execution time (runtime): based on the rows actually observed while the statement runs, it chooses the plan branch that fits the real data volume instead of trusting a possibly wrong estimate, which greatly improves the chance of getting a good plan on the very first execution. An adaptive plan contains several built-in subplans; a subplan is a portion of the plan that the optimizer can switch to as an alternative at runtime, such as a different join method or a different access path. During the first execution, while the decision is still open, the rows flowing through the plan are buffered by statistics collectors, so the choice is driven by actual row counts rather than by the original estimates. Once enough rows have been seen to make the decision, the optimizer selects the final subplan, buffering stops, and the statistics collectors are bypassed for the rest of the execution. Oracle Database 12c currently uses two adaptive techniques: Dynamic Plans: a dynamic plan decides between alternative subplans during the first execution; the statistics collectors embedded in the plan watch the incoming rows, and based on the observed cardinality one of the subplans becomes the final plan. Reoptimization: unlike a dynamic plan, reoptimization does not change the execution that is currently running; the statistics gathered during one execution are used to re-optimize the statement, so it is the subsequent executions that get the better plan. The initialization parameter OPTIMIZER_ADAPTIVE_REPORTING_ONLY controls report-only mode: when set to TRUE, the information needed for the adaptive decision is still gathered, but the default plan is always executed and the adaptive choice is only reported. Dynamic Plans. When choosing between plan alternatives at parse time, the optimizer may simply lack the information to make a reliable decision; a dynamic plan postpones exactly this decision until the first execution. The plan initially picked from estimates alone is called the default plan; the plan actually resolved at runtime is the final plan. If the final plan and the default plan turn out to be the same, nothing visible changes; if not, the rows buffered by the statistics collector are passed on to the alternative subplan, and subsequent executions use the final plan directly. The V$SQL column IS_RESOLVED_DYNAMIC_PLAN shows whether the plan of a child cursor is already the final plan or still the default plan. To demonstrate a dynamic plan cleanly, first drop all existing SQL plan directives: declare cursor PLAN_DIRECTIVE_IDS is select directive_id from DBA_SQL_PLAN_DIRECTIVES; begin for z in PLAN_DIRECTIVE_IDS loop DBMS_SPD.DROP_SQL_PLAN_DIRECTIVE(z.directive_id); end loop; end; / explain plan for select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id; select * from table(dbms_xplan.display()); Plan hash value: 1255158658 www.askmaclean.com ------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | ------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 4 | 128 | 7 (0)| 00:00:01 | | 1 | NESTED LOOPS | | | | | | | 2 | NESTED LOOPS | | 4 | 128 | 7 (0)| 00:00:01 | |* 3 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 (0)| 00:00:01 | |* 4 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK | 1 | | 0 (0)| 00:00:01 | | 5 | TABLE ACCESS BY INDEX ROWID| PRODUCT_INFORMATION | 1 | 20 | 1 (0)| 00:00:01 | ------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 3 - filter("O"."UNIT_PRICE"=15 AND "QUANTITY">1) 4 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") alter session set events '10053 trace name context forever,level 1'; OR alter session set events 'trace[SQL_Plan_Directive] disk highest'; select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id; ---------------------------------------------------------------+-----------------------------------+ | Id | Operation |
Name | Rows | Bytes | Cost | Time | ---------------------------------------------------------------+-----------------------------------+ | 0 | SELECT STATEMENT | | | | 7 | | | 1 | HASH JOIN | | 4 | 128 | 7 | 00:00:01 | | 2 | NESTED LOOPS | | | | | | | 3 | NESTED LOOPS | | 4 | 128 | 7 | 00:00:01 | | 4 | STATISTICS COLLECTOR | | | | | | | 5 | TABLE ACCESS FULL | ORDER_ITEMS | 4 | 48 | 3 | 00:00:01 | | 6 | INDEX UNIQUE SCAN | PRODUCT_INFORMATION_PK| 1 | | 0 | | | 7 | TABLE ACCESS BY INDEX ROWID | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | | 8 | TABLE ACCESS FULL | PRODUCT_INFORMATION | 1 | 20 | 1 | 00:00:01 | ---------------------------------------------------------------+-----------------------------------+ Predicate Information: ---------------------- 1 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") 5 - filter(("O"."UNIT_PRICE"=15 AND "QUANTITY">1)) 6 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID") ===================================== SPD: BEGIN context at statement level ===================================== Stmt: ******* UNPARSED QUERY IS ******* SELECT /*+ OPT_ESTIMATE (@"SEL$1" JOIN ("P"@"SEL$1" "O"@"SEL$1") ROWS=13.000000 ) OPT_ESTIMATE (@"SEL$1" TABLE "O"@"SEL$1" ROWS=13.000000 ) */ "P"."PRODUCT_NAME" "PRODUCT_NAME" FROM "OE"."ORDER_ITEMS" "O","OE"."PRODUCT_INFORMATION" "P" WHERE "O"."UNIT_PRICE"=15 AND "O"."QUANTITY">1 AND "P"."PRODUCT_ID"="O"."PRODUCT_ID" Objects referenced in the statement PRODUCT_INFORMATION[P] 92194, type = 1 ORDER_ITEMS[O] 92197, type = 1 Objects in the hash table Hash table Object 92197, type = 1, ownerid = 6573730143572393221: No Dynamic Sampling Directives for the object Hash table Object 92194, type = 1, ownerid = 17822962561575639002: No Dynamic Sampling Directives for the object Return code in qosdInitDirCtx: ENBLD =================================== SPD: END context at statement level =================================== ======================================= SPD: BEGIN context at query block level ======================================= Query Block SEL$1 (#0) Return code in qosdSetupDirCtx4QB: NOCTX ===================================== SPD: END context at query block level ===================================== SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Inserted felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: qosdCreateFindingSingTab retCode = CREATED, fid = 2896834833840853267 SPD: qosdCreateDirCmp retCode = CREATED, fid = 2896834833840853267 SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SKIP_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = JOIN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = 
INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_SCAN SPD: Return code in qosdDSDirSetup: NOCTX, estType = INDEX_FILTER SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92197, objtyp = 1, vecsize = 6, colvec = [4, 5, ], fid = 2896834833840853267 SPD: Modified felem, fid=2896834833840853267, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = YES, keep = YES SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 5618517328604016300 SPD: Modified felem, fid=5618517328604016300, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 1, objcnt = 1, obItr = 0, objid = 92194, objtyp = 1, vecsize = 2, colvec = [1, ], fid = 1142802697078608149 SPD: Modified felem, fid=1142802697078608149, ftype = 1, freason = 1, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO SPD: Generating finding id: type = 1, reason = 2, objcnt = 2, obItr = 0, objid = 92194, objtyp = 1, vecsize = 0, obItr = 1, objid = 92197, objtyp = 1, vecsize = 0, fid = 1437680122701058051 SPD: Modified felem, fid=1437680122701058051, ftype = 1, freason = 2, dtype = 0, dstate = 0, dflag = 0, ver = NO, keep = NO select * from table(dbms_xplan.display_cursor(format=>'report')) ; ????report????adaptive plan Adaptive plan: ------------- This cursor has an adaptive plan, but adaptive plans are enabled for reporting mode only.  The plan that would be executed if adaptive plans were enabled is displayed below. 
------------------------------------------------------------------------------------------ | Id  | Operation          | Name                | Rows  | Bytes | Cost (%CPU)| Time     | ------------------------------------------------------------------------------------------ |   0 | SELECT STATEMENT   |                     |       |       |     7 (100)|          | |*  1 |  HASH JOIN         |                     |     4 |   128 |     7   (0)| 00:00:01 | |*  2 |   TABLE ACCESS FULL| ORDER_ITEMS         |     4 |    48 |     3   (0)| 00:00:01 | |   3 |   TABLE ACCESS FULL| PRODUCT_INFORMATION |     1 |    20 |     1   (0)| 00:00:01 | ------------------------------------------------------------------------------------------ SQL> select SQL_ID,IS_RESOLVED_DYNAMIC_PLAN,sql_text from v$SQL WHERE SQL_TEXT like '%MALCEAN%' and sql_text not like '%like%'; SQL_ID IS -------------------------- -- SQL_TEXT -------------------------------------------------------------------------------- 6ydj1bn1bng17 Y select /*MALCEAN*/ product_name from oe.order_items o, oe.product_information p where o.unit_price=15 and quantity>1 and p.product_id=o.product_id Note that explain plan for shows only the default plan; the final plan is resolved when the statement actually runs, and V$SQL.IS_RESOLVED_DYNAMIC_PLAN = Y above confirms that this cursor's plan has been resolved at runtime (here the nested loops of the default plan were replaced by a hash join). DBA_SQL_PLAN_DIRECTIVES shows the SQL plan directives present in the system; SQL plan directives are new in 12c. They are recorded in the SGA and periodically persisted by background processes such as MMON, much like DML column usage information; they can also be flushed manually with DBMS_SPD.flush_sql_plan_directive: select directive_id,type,reason from DBA_SQL_PLAN_DIRECTIVES / DIRECTIVE_ID TYPE REASON ----------------------------------- -------------------------------- ----------------------------- 10321283028317893030 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE 4757086536465754886 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE 16085268038103121260 DYNAMIC_SAMPLING JOIN CARDINALITY MISESTIMATE SQL> set pages 9999 SQL> set lines 300 SQL> col state format a5 SQL> col subobject_name format a11 SQL> col col_name format a11 SQL> col object_name format a13 SQL> select d.directive_id, o.object_type, o.object_name, o.subobject_name col_name, d.type, d.state, d.reason 2 from dba_sql_plan_directives d, dba_sql_plan_dir_objects o 3 where d.DIRECTIVE_ID=o.DIRECTIVE_ID 4 and o.object_name in ('ORDER_ITEMS') 5 order by d.directive_id; DIRECTIVE_ID OBJECT_TYPE OBJECT_NAME COL_NAME TYPE STATE REASON ------------ ------------ ------------- ----------- -------------------------------- ----- ------------------------------------- --- 1.8156E+19 COLUMN ORDER_ITEMS UNIT_PRICE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1.8156E+19 TABLE ORDER_ITEMS DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1.8156E+19 COLUMN ORDER_ITEMS QUANTITY DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE DBA_SQL_PLAN_DIRECTIVES is based on the underlying tables _BASE_OPT_DIRECTIVE and _BASE_OPT_FINDING: SELECT d.dir_own#, d.dir_id, d.f_id, decode(type, 1, 'DYNAMIC_SAMPLING', 'UNKNOWN'), decode(state, 1, 'NEW', 2, 'MISSING_STATS', 3, 'HAS_STATS', 4, 'CANDIDATE', 5, 'PERMANENT', 6, 'DISABLED', 'UNKNOWN'), decode(bitand(flags, 1), 1, 'YES', 'NO'), cast(d.created as timestamp), cast(d.last_modified as timestamp), -- Please see QOSD_DAYS_TO_UPDATE and QOSD_PLUS_SECONDS for more details -- about 6.5 cast(d.last_used as timestamp) - NUMTODSINTERVAL(6.5, 'day') FROM sys.opt_directive$ d SQL plan directives are managed through the dbms_spd package, and their default retention is 53 weeks: Package: DBMS_SPD This package provides subprograms for managing Sql Plan Directives(SPD). SPD are objects generated automatically by Oracle server. 
For example, if the server detects that the single table cardinality estimated by the optimizer is off from the actual number of rows returned when accessing the table, it will automatically create a directive to do dynamic sampling for the table. When any SQL statement referencing the table is compiled, the optimizer will perform dynamic sampling for the table to get a more accurate estimate. Notes: DBMS_SPD is an invoker-rights package. The invoker requires ADMINISTER SQL MANAGEMENT OBJECT privilege for executing most of the subprograms of this package. Also the subprograms commit the current transaction (if any), perform the operation and commit it again. DBA view dba_sql_plan_directives shows all the directives created in the system and the view dba_sql_plan_dir_objects displays the objects that are included in the directives. -- Default value for SPD_RETENTION_WEEKS SPD_RETENTION_WEEKS_DEFAULT CONSTANT varchar2(4) := '53'; | STATE : NEW : Newly created directive. | : MISSING_STATS : The directive objects do not | have relevant stats. | : HAS_STATS : The objects have stats. | : PERMANENT : A permanent directive. Server | evaluated effectiveness and these | directives are useful. | | AUTO_DROP : YES : Directive will be dropped | automatically if not | used for SPD_RETENTION_WEEKS. | This is the default behavior. | NO : Directive will not be dropped | automatically. Procedure: flush_sql_plan_directive This procedure allows manually flushing the Sql Plan directives that are automatically recorded in SGA memory while executing sql statements. The information recorded in SGA are periodically flushed by oracle background processes. This procedure just provides a way to flush the information manually. The hidden parameter "_optimizer_dynamic_plans" (enable dynamic plans) controls the feature: its default value TRUE enables dynamic plans, and setting it to FALSE disables them. At present, dynamic plans mainly cover the choice between a Nested Loop and a Hash Join. The statement starts under one join method, and if the row count observed by the statistics collector shows that the other method would be cheaper, the final plan switches to it; along with the join method, the access path used inside the subplan changes accordingly. For example, when joining sales to customers, if many rows come out of the driving table the final plan uses a HASH JOIN with a full scan of customers in the subplan; if only a few rows show up, it uses a Nested Loop with an index range scan on cust_id plus table access by rowid. Cardinality feedback. Cardinality feedback was introduced in 11.2 and can be seen as a form of re-optimization. During the first execution the optimizer monitors the cardinalities actually produced by the plan operations; if they deviate significantly from its estimates, the actual values are stored with the cursor and the statement is re-optimized using those values on the next execution. Cardinality feedback is useful in cases such as: tables whose statistics are missing or stale, multiple correlated filter predicates on the same table, and complex predicates containing expressions or functions whose selectivity the optimizer cannot estimate accurately. It has clear limitations as well: it corrects single-table and join cardinality misestimates only from the second execution onwards, and the corrected values live only in the cursor, so the feedback is lost once the cursor is aged out of the shared pool. 
SELECT /*+ gather_plan_statistics */ product_name FROM order_items o, product_information p WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id Plan hash value: 1553478007 ---------------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem | ---------------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 13 |00:00:00.01 | 24 | 20 | | | | |* 1 | HASH JOIN | | 1 | 4 | 13 |00:00:00.01 | 24 | 20 | 2061K| 2061K| 429K (0)| |* 2 | TABLE ACCESS FULL| ORDER_ITEMS | 1 | 4 | 13 |00:00:00.01 | 7 | 6 | | | | | 3 | TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 1 | 288 |00:00:00.01 | 17 | 14 | | | | ---------------------------------------------------------------------------------------------------------------------------------------- SELECT /*+ gather_plan_statistics */ product_name FROM order_items o, product_information p WHERE o.unit_price = 15 AND quantity > 1 AND p.product_id = o.product_id Plan hash value: 1553478007 ------------------------------------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem | ------------------------------------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 13 |00:00:00.01 | 24 | | | | |* 1 | HASH JOIN | | 1 | 13 | 13 |00:00:00.01 | 24 | 2061K| 2061K| 413K (0)| |* 2 | TABLE ACCESS FULL| ORDER_ITEMS | 1 | 13 | 13 |00:00:00.01 | 7 | | | | | 3 | TABLE ACCESS FULL| PRODUCT_INFORMATION | 1 | 288 | 288 |00:00:00.01 | 17 | | | | ------------------------------------------------------------------------------------------------------------------------------- Note ----- - statistics feedback used for this statement SQL> select count(*) from v$SQL where SQL_ID='cz0hg2zkvd10y'; COUNT(*) ---------- 2 SQL>select sql_ID,USE_FEEDBACK_STATS FROM V$SQL_SHARED_CURSOR where USE_FEEDBACK_STATS ='Y'; SQL_ID U ------------- - cz0hg2zkvd10y Y The example above shows cardinality feedback at work: on the first execution the optimizer estimated 4 rows from order_items but 13 actually came back, so the cursor was marked and the second execution was re-optimized with the corrected order_items cardinality. Note that both executions share the same plan hash value (the plan shape did not change here), yet there are 2 child cursors, and the gather_plan_statistics hint lets us compare the actual rows (A-Rows) with the estimates (E-Rows). Automatic Re-optimization. Unlike a dynamic plan, re-optimization changes the plan for the executions that follow, not the one currently running. At the end of the first execution, the optimizer compares its original estimates with the execution statistics actually collected; if they differ significantly, it stores the corrected statistics, marks the cursor, and uses the stored statistics to pick a better plan at the next parse. This can repeat over several executions until the feedback statistics stabilize. Re-optimization complements dynamic plans: a dynamic plan can only switch between the subplans compiled into it, while re-optimization can produce an entirely different plan, for example a different join order, so it covers cases a dynamic plan cannot. In Oracle Database 12c, join statistics are monitored in addition to single-table cardinalities, and re-optimization cooperates with adaptive cursor sharing for statements that use bind variables. During execution, the statistics collectors gather the information needed to decide whether re-optimization is worthwhile, and the optimizer only marks a cursor for re-optimization when the gap between estimates and actuals justifies the extra hard parse. The V$SQL column IS_REOPTIMIZABLE shows whether the next execution will trigger a re-optimization; when the instance runs in reporting-only mode, the cursor carries the re-optimization information but does not trigger it. The column is documented as follows: 
IS_REOPTIMIZABLE VARCHAR2(1) This columns shows whether the next execution matching this child cursor will trigger a reoptimization. The values are:   Y: If the next execution will trigger a reoptimization R: If the child cursor contains reoptimization information, but will not trigger reoptimization because the cursor was compiled in reporting mode N: If the child cursor has no reoptimization information ??1: select plan_table_output from table (dbms_xplan.display_cursor('gwf99gfnm0t7g',NULL,'ALLSTATS LAST')); SQL_ID  gwf99gfnm0t7g, child number 0 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 1906736282 ------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation             | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | ------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT      |                     |      1 |        |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   1 |  NESTED LOOPS         |                     |      1 |      1 |    269 |00:00:00.02 |    1336 |     18 |       |       |          | |   2 |   MERGE JOIN CARTESIAN|                     |      1 |      4 |   9135 |00:00:00.02 |      34 |     15 |       |       |          | |*  3 |    TABLE ACCESS FULL  | PRODUCT_INFORMATION |      1 |      1 |     87 |00:00:00.01 |      33 |     14 |       |       |          | |   4 |    BUFFER SORT        |                     |     87 |    105 |   9135 |00:00:00.01 |       1 |      1 |  4096 |  4096 | 4096  (0)| |   5 |     INDEX FULL SCAN   | ORDER_PK            |      1 |    105 |    105 |00:00:00.01 |       1 |      1 |       |       |          | |*  6 |   INDEX UNIQUE SCAN   | ORDER_ITEMS_UK      |   9135 |      1 |    269 |00:00:00.01 |    1302 |      3 |       |       |          | ------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    6 - access("O"."ORDER_ID"="ORDER_ID" AND "P"."PRODUCT_ID"="O"."PRODUCT_ID") SQL_ID  gwf99gfnm0t7g, child number 1 ------------------------------------- SELECT /*+ SFTEST gather_plan_statistics */ o.order_id, v.product_name FROM  orders o,   ( SELECT order_id, product_name FROM order_items o, product_information p     WHERE  p.product_id = o.product_id AND list_price < 50 AND min_price < 40  ) v WHERE o.order_id = v.order_id Plan hash value: 35479787 -------------------------------------------------------------------------------------------------------------------------------------------- | Id  | Operation              | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem | -------------------------------------------------------------------------------------------------------------------------------------------- |   0 | SELECT STATEMENT       |                     |      1 |        |    269 
|00:00:00.01 |      63 |      3 |       |       |          | |   1 |  NESTED LOOPS          |                     |      1 |    269 |    269 |00:00:00.01 |      63 |      3 |       |       |          | |*  2 |   HASH JOIN            |                     |      1 |    313 |    269 |00:00:00.01 |      42 |      3 |  1321K|  1321K| 1234K (0)| |*  3 |    TABLE ACCESS FULL   | PRODUCT_INFORMATION |      1 |     87 |     87 |00:00:00.01 |      16 |      0 |       |       |          | |   4 |    INDEX FAST FULL SCAN| ORDER_ITEMS_UK      |      1 |    665 |    665 |00:00:00.01 |      26 |      3 |       |       |          | |*  5 |   INDEX UNIQUE SCAN    | ORDER_PK            |    269 |      1 |    269 |00:00:00.01 |      21 |      0 |       |       |          | -------------------------------------------------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): ---------------------------------------------------    2 - access("P"."PRODUCT_ID"="O"."PRODUCT_ID")    3 - filter(("MIN_PRICE"<40 AND "LIST_PRICE"<50))    5 - access("O"."ORDER_ID"="ORDER_ID") Note -----    - statistics feedback used for this statement    SQL> select IS_REOPTIMIZABLE,child_number FROM V$SQL  A where A.SQL_ID='gwf99gfnm0t7g'; IS CHILD_NUMBER -- ------------ Y             0 N             1    1* select child_number,other_xml From v$SQL_PLAN  where SQL_ID='gwf99gfnm0t7g' and other_xml is not nul SQL> / CHILD_NUMBER OTHER_XML ------------ --------------------------------------------------------------------------------            1 <other_xml><info type="cardinality_feedback">yes</info><info type="db_version">1              2.1.0.1</info><info type="parse_schema"><![CDATA["OE"]]></info><info type="plan_              hash">35479787</info><info type="plan_hash_2">3382491761</info><outline_data><hi              nt><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]></hint><hint><![CDATA[OPTIMIZER_FEATUR              ES_ENABLE('12.1.0.1')]]></hint><hint><![CDATA[DB_VERSION('12.1.0.1')]]></hint><h              int><![CDATA[ALL_ROWS]]></hint><hint><![CDATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></              hint><hint><![CDATA[MERGE(@"SEL$2")]]></hint><hint><![CDATA[OUTLINE(@"SEL$1")]]>              </hint><hint><![CDATA[OUTLINE(@"SEL$2")]]></hint><hint><![CDATA[FULL(@"SEL$F5BB7              4E1" "P"@"SEL$2")]]></hint><hint><![CDATA[INDEX_FFS(@"SEL$F5BB74E1" "O"@"SEL$2"              ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PRODUCT_ID"))]]></hint><hint><![CDATA[I              NDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA[              LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$2" "O"@"SEL$1")]]></hint><hint><![C              DATA[USE_HASH(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint><hint><![CDATA[USE_NL(@"SEL$              F5BB74E1" "O"@"SEL$1")]]></hint></outline_data></other_xml>            0 <other_xml><info type="db_version">12.1.0.1</info><info type="parse_schema"><![C              DATA["OE"]]></info><info type="plan_hash">1906736282</info><info type="plan_hash              _2">2579473118</info><outline_data><hint><![CDATA[IGNORE_OPTIM_EMBEDDED_HINTS]]>              </hint><hint><![CDATA[OPTIMIZER_FEATURES_ENABLE('12.1.0.1')]]></hint><hint><![CD              ATA[DB_VERSION('12.1.0.1')]]></hint><hint><![CDATA[ALL_ROWS]]></hint><hint><![CD              ATA[OUTLINE_LEAF(@"SEL$F5BB74E1")]]></hint><hint><![CDATA[MERGE(@"SEL$2")]]></hi              nt><hint><![CDATA[OUTLINE(@"SEL$1")]]></hint><hint><![CDATA[OUTLINE(@"SEL$2")]]>     
         </hint><hint><![CDATA[FULL(@"SEL$F5BB74E1" "P"@"SEL$2")]]></hint><hint><![CDATA[              INDEX(@"SEL$F5BB74E1" "O"@"SEL$1" ("ORDERS"."ORDER_ID"))]]></hint><hint><![CDATA              [INDEX(@"SEL$F5BB74E1" "O"@"SEL$2" ("ORDER_ITEMS"."ORDER_ID" "ORDER_ITEMS"."PROD              UCT_ID"))]]></hint><hint><![CDATA[LEADING(@"SEL$F5BB74E1" "P"@"SEL$2" "O"@"SEL$1              " "O"@"SEL$2")]]></hint><hint><![CDATA[USE_MERGE_CARTESIAN(@"SEL$F5BB74E1" "O"@"              SEL$1")]]></hint><hint><![CDATA[USE_NL(@"SEL$F5BB74E1" "O"@"SEL$2")]]></hint></o              utline_data></other_xml> ??2: SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 0 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 -------------------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | -------------------------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | 14 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 8 | 29 |00:00:00.01 | 17 | 14 | -------------------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OWNER OBJECT_NAME COL_NAME OBJECT TYPE STATE REASON ----------------------- ----- ------------- ----------- ------ ---------------- ----- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING NEW SINGLE TABLE CARDINALITY MISESTIMATE SELECT /*+gather_plan_statistics*/ * FROM customers WHERE cust_state_province='CA' AND country_id='US'; ELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID b74nw722wjvy3, child number 1 ------------------------------------- select /*+gather_plan_statistics*/ * from customers where CUST_STATE_PROVINCE='CA' and country_id='US' Plan hash value: 1683234692 ----------------------------------------------------------------------------------------- | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | ----------------------------------------------------------------------------------------- | 0 | SELECT 
STATEMENT | | 1 | | 29 |00:00:00.01 | 17 | |* 1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 29 | 29 |00:00:00.01 | 17 | ----------------------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='CA' AND "COUNTRY_ID"='US')) Note ----- - cardinality feedback used for this statement SELECT SQL_ID, CHILD_NUMBER, SQL_TEXT, IS_REOPTIMIZABLE FROM V$SQL WHERE SQL_TEXT LIKE 'SELECT /*+gather_plan_statistics*/%'; SQL_ID CHILD_NUMBER SQL_TEXT I ------------- ------------ ----------- - b74nw722wjvy3 0 select /*+g Y ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' b74nw722wjvy3 1 select /*+g N ather_plan_ statistics* / * from cu stomers whe re CUST_STA TE_PROVINCE ='CA' and c ountry_id=' US' SELECT /*+gather_plan_statistics*/ CUST_EMAIL FROM CUSTOMERS WHERE CUST_STATE_PROVINCE='MA' AND COUNTRY_ID='US'; SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'ALLSTATS LAST')); PLAN_TABLE_OUTPUT ------------------------------------- SQL_ID 3tk6hj3nkcs2u, child number 0 ------------------------------------- Select /*+gather_plan_statistics*/ cust_email From customers Where cust_state_province='MA' And country_id='US' Plan hash value: 1683234692 ------------------------------------------------------------------------------- |Id | Operation | Name | Starts|E-Rows|A-Rows| A-Time |Buffers| ------------------------------------------------------------------------------- | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01| 16 | |*1 | TABLE ACCESS FULL| CUSTOMERS | 1 | 2| 2 |00:00:00.01| 16 | ----------------------------------------------------------------------------- Predicate Information (identified by operation id): --------------------------------------------------- 1 - filter(("CUST_STATE_PROVINCE"='MA' AND "COUNTRY_ID"='US')) Note ----- - dynamic sampling used for this statement (level=2) - 1 Sql Plan Directive used for this statement EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE; SELECT TO_CHAR(d.DIRECTIVE_ID) dir_id, o.OWNER, o.OBJECT_NAME, o.SUBOBJECT_NAME col_name, o.OBJECT_TYPE, d.TYPE, d.STATE, d.REASON FROM DBA_SQL_PLAN_DIRECTIVES d, DBA_SQL_PLAN_DIR_OBJECTS o WHERE d.DIRECTIVE_ID=o.DIRECTIVE_ID AND o.OWNER IN ('SH') ORDER BY 1,2,3,4,5; DIR_ID OW OBJECT_NA COL_NAME OBJECT TYPE STATE REASON ------------------- -- --------- ---------- ------- --------------- ------------- ------------------------ 1484026771529551585 SH CUSTOMERS COUNTRY_ID COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE 1484026771529551585 SH CUSTOMERS CUST_STATE_ COLUMN DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY PROVINCE MISESTIMATE 1484026771529551585 SH CUSTOMERS TABLE DYNAMIC_SAMPLING MISSING_STATS SINGLE TABLE CARDINALITY MISESTIMATE
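    To experiment with the feature without letting it change any plans, the reporting-only mode mentioned above can be toggled and inspected like this (a minimal sketch):
      -- gather adaptive-plan information but always run the default plan
      ALTER SESSION SET optimizer_adaptive_reporting_only = TRUE;
      -- ... run the statement of interest ...
      -- the report section then shows the plan that would have been chosen
      SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(FORMAT=>'report'));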

    Read the article

  • Data Formatters temporarily unavailable

    - by iphone newbie
    I'm currently working on an iPhone app. This app has a login screen and also a signup screen. After the user has successfully signed up, I dismiss the signup view, then the app automatically logs in using the created account. After that, the login view is dismissed, showing the main view. I'm trying to modify this by immediately dismissing the login view, since I already have the account details of the user when the signup is successful. Basically, the ideal flow is: after the user successfully signs up, I save the username and password in a singleton class, then dismiss the signup view. When I get to the parent view (which is the login screen), I have a variable that checks if there was a successful signup. If that variable is true, I want to immediately dismiss the login view. However, I come across this error message: Data Formatters temporarily unavailable, will re-try after a 'continue'. (Unknown error loading shared library "/Developer/usr/lib/libXcodeDebuggerSupport.dylib") I'm not really sure why this happens. I have no problems dismissing the login view when I go through the actual login procedure, which of course also dismisses the login view if the user inputs a correct username and password. I'm not exactly sure, but I'm starting to think that the iPhone cannot handle dismissing 2 view controllers almost at the same time. Is it possible that I'm dismissing the login view too quickly? Is that a factor? Is there any way for me to dismiss 2 view controllers almost simultaneously without coming across this error message?
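    One approach worth trying (my suggestion, not from the original post) is to defer the second dismissal until the login view is fully back on screen, that is, do it in viewDidAppear: instead of immediately after the signup view is dismissed, so the two dismissal animations never overlap. A sketch, assuming a signupSucceeded flag like the variable described above:
      // In the login view controller.
      - (void)viewDidAppear:(BOOL)animated {
          [super viewDidAppear:animated];
          if (self.signupSucceeded) {
              // The signup view is fully gone by now, so this dismissal
              // no longer overlaps the previous one.
              [self dismissModalViewControllerAnimated:YES];
          }
      }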

    Read the article

  • AD - Using UserPrincipal.FindByIdentity and PrincipalContext with nested OU - C#

    - by Solid Snake
    Here is what I am trying to achieve: I have a nested OU structure that is about 5 levels deep. OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com I am trying to find out if the user has permissions/exists at OU=Portal. Here's a snippet of what I currently have: PrincipalContext domain = new PrincipalContext( ContextType.Domain, "test.com", "OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com"); UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername); PrincipalSearchResult<Principal> group = user.GetAuthorizationGroups(); For some unknown reason, the user object returned by the code above is always null. However, if I drop all the OUs as follows: PrincipalContext domain = new PrincipalContext( ContextType.Domain, "test.com", "DC=test,DC=com"); UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername); PrincipalSearchResult<Principal> group = user.GetAuthorizationGroups(); it works just fine and returns the correct user. I am simply trying to reduce the number of results as opposed to getting everything from AD. Is there anything that I am doing wrong? I've googled for hours and tested various combinations without much luck. Any help is appreciated. Thanks. Dan
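    One thing to keep in mind: the container passed to PrincipalContext is the search root, so FindByIdentity only finds accounts that actually reside beneath that OU; a user stored elsewhere in the directory comes back null. A workaround sketch (my suggestion, not a verified fix): search from the domain root and then test whether the user's distinguished name falls under the target OU:
      PrincipalContext domain = new PrincipalContext(
          ContextType.Domain, "test.com", "DC=test,DC=com");
      UserPrincipal user = UserPrincipal.FindByIdentity(domain, myusername);
      // check whether the account actually sits under the Portal OU
      bool inPortalOu = user != null && user.DistinguishedName.EndsWith(
          "OU=Portal,OU=Dev,OU=Apps,OU=Grps,OU=Admin,DC=test,DC=com",
          StringComparison.OrdinalIgnoreCase);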

    Read the article

  • Java jasper report NullPointerException

    - by William Welch
    I am new to Java and I am running into this issue that I can't figure out. I inherited this project and I have the following code in one of my scriptlets: DefaultLogger.logMessage("DEBUG path: "+ reportFile.getPath()); DefaultLogger.logMessage("DEBUG parameters: "+ parameters); DefaultLogger.logMessage("DEBUG jr: "+ jr); bytes = JasperRunManager.runReportToPdf(reportFile.getPath(), parameters, jr); And I am getting the following results (the fourth line there is line 287 in FootwearReportsServlet.doGet): DEBUG path: C:\Development\JavaWorkspaces\Workspace1\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\RSLDevelopmentStandard\reports\templates\footwear\RslFootwearReport.jasper DEBUG parameters: {class_list_subreport=net.sf.jasperreports.engine.JasperReport@b07af1, signature_path=images/logo_reports.jpg, submission_id=20070213154168780, test_results_subreport=net.sf.jasperreports.engine.JasperReport@5795ce, logo_path=images/logos_3.gif, report_connection_secondary=com.mysql.jdbc.JDBC4Connection@2c39d2, testing_packages_subreport=net.sf.jasperreports.engine.JasperReport@1883d5f, signature_path2=images/logo_reports.jpg, tpid_subreport=net.sf.jasperreports.engine.JasperReport@17531fd, first_page_subreport=net.sf.jasperreports.engine.JasperReport@12504e0} DEBUG jr: net.sf.jasperreports.engine.data.JRMapCollectionDataSource@1630eb6 Apr 29, 2010 4:15:13 PM org.apache.catalina.core.StandardWrapperValve invoke SEVERE: Servlet.service() for servlet FootwearReportsServlet threw exception java.lang.NullPointerException at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:89) at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:601) at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:517) at net.sf.jasperreports.engine.JasperRunManager.runReportToPdf(JasperRunManager.java:385) at com.rsl.reports.FootwearReportsServlet.doGet(FootwearReportsServlet.java:287) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489) at java.lang.Thread.run(Unknown Source) What I can't figure out is where the null reference is. From the debug lines I can see that each parameter has a value. Could it be referring to a bad path to one of the images? Any ideas? For some reason my server won't start in debug mode in Eclipse so I am having trouble figuring this out.
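    To narrow down where the null reference comes from, the one-shot runReportToPdf call can be split into its fill and export steps (a debugging sketch with the same inputs as above, not a fix; JasperExportManager is part of the JasperReports engine):
      // the NullPointerException in the stack trace originates in the fill step
      JasperPrint print = JasperFillManager.fillReport(reportFile.getPath(), parameters, jr);
      // export separately, so a failure here is clearly distinguishable
      byte[] bytes = JasperExportManager.exportReportToPdf(print);
    Running this with null checks on reportFile.getPath(), parameters, and jr beforehand makes it unambiguous whether the failure is in loading, filling, or exporting the report.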

    Read the article

  • Detect blocked popup in Chrome

    - by Andrew
    I am aware of javascript techniques to detect whether a popup is blocked in other browsers (as described in the answer to this question). Here's the basic test: var newWin = window.open(url); if(!newWin || newWin.closed || typeof newWin.closed=='undefined') { //POPUP BLOCKED } But this does not work in Chrome. The "POPUP BLOCKED" section is never reached when the popup is blocked. Of course, the test is working to an extent since Chrome doesn't actually block the popup, but opens it in a tiny minimized window at the lower right corner which lists "blocked" popups. What I would like to do is be able to tell if the popup was blocked by Chrome's popup blocker. I try to avoid browser sniffing in favor of feature detection. Is there a way to do this without browser sniffing? Edit: I have now tried making use of newWin.outerHeight, newWin.left, and other similar properties to accomplish this. Google Chrome returns all position and height values as 0 when the popup is blocked. Unfortunately, it also returns the same values even if the popup is actually opened for an unknown amount of time. After some magical period (a couple of seconds in my testing), the location and size information is returned as the correct values. In other words, I'm still no closer to figuring this out. Any help would be appreciated.
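    Building on the observation above that Chrome reports zero size for a while, a common compromise (a sketch; the delay value is a guess and inherits the weakness already described) is to re-test the window's dimensions only after Chrome has had time to initialize it:
      var newWin = window.open(url);
      setTimeout(function () {
          // Chrome keeps a "blocked" popup object around but gives it zero size
          if (!newWin || newWin.closed ||
              typeof newWin.innerHeight === 'undefined' || newWin.innerHeight === 0) {
              // treat the popup as blocked
          }
      }, 2500); // must outlast the "magical period" described above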

    Read the article

  • Web service CopyIntoItems is not working when uploading a file to SharePoint

    - by Joeri
    The following piece of C# always fails: CopyIntoItems returns a single CopyResult with ErrorCode "Unknown" and ErrorMessage "Object reference not set to an instance of an object". Does anybody have an idea what I am missing? try { //Copy WebService Settings String strUserName = "abc"; String strPassword = "abc"; String strDomain = "SVR03"; String FileName = "Filename.xls"; WebReference.Copy copyService = new WebReference.Copy(); copyService.Url = "http://192.168.11.253/_vti_bin/copy.asmx"; copyService.Credentials = new NetworkCredential (strUserName, strPassword, strDomain); // Filestream of attachment FileStream MyFile = new FileStream(@"C:\temp\28200.xls", FileMode.Open, FileAccess.Read); // Read the attachment in to a variable byte[] Contents = new byte[MyFile.Length]; MyFile.Read(Contents, 0, (int)MyFile.Length); MyFile.Close(); //Change file name if not exist then create new one String[] destinationUrl = { "http://192.168.11.253/Shared Documents/28200.xls" }; // Setup some SharePoint metadata fields WebReference.FieldInformation fieldInfo = new WebReference.FieldInformation(); WebReference.FieldInformation[] ListFields = { fieldInfo }; //Copy the document from Local to SharePoint WebReference.CopyResult[] result; uint NewListId = copyService.CopyIntoItems (FileName, destinationUrl, ListFields, Contents, out result); if (result.Length < 1) Console.WriteLine("Unable to create a document library item"); else { Console.WriteLine( result.Length ); Console.WriteLine( result[0].ErrorCode ); Console.WriteLine( result[0].ErrorMessage ); Console.WriteLine( result[0].DestinationUrl); } } catch (Exception ex) { Console.WriteLine("Exception: {0}", ex.Message); }
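    Two things that commonly trip up CopyIntoItems and might be worth ruling out here (suggestions, not a verified fix): the FieldInformation entry above is completely empty, and the SourceUrl argument is just a bare file name. A sketch with populated metadata and the destination URL reused as the source:
      // populate at least the Title field instead of passing an empty FieldInformation
      WebReference.FieldInformation titleField = new WebReference.FieldInformation();
      titleField.DisplayName = "Title";
      titleField.InternalName = "Title";
      titleField.Type = WebReference.FieldType.Text;
      titleField.Value = "28200";
      WebReference.FieldInformation[] fields = { titleField };
      // reuse the destination URL as the SourceUrl
      WebReference.CopyResult[] result;
      copyService.CopyIntoItems(destinationUrl[0], destinationUrl, fields, Contents, out result);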

    Read the article

< Previous Page | 174 175 176 177 178 179 180 181 182 183 184 185  | Next Page >