Search Results

Search found 28864 results on 1155 pages for 'ob start'.


  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
    What is ADDS?
    Every Microsoft-oriented infrastructure in today's enterprises will depend largely on the Active Directory version built by Microsoft. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well designed and implemented Active Directory makes life for IT personnel and users alike a lot easier; centralised management and the abilities it opens up are ample.
    But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you will obviously not see Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another.
    The domain name services
    The domain naming service (or DNS for short) is a service which translates IP addresses (the identifiers for each computer in your domain) into readable and easy to understand names. This service is a prerequisite for ADDS to work, and having a wrong record in a DNS server will make any ADDS service fail. Generally speaking a DNS service will run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box…
    Where to start?
    If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as getting it set up wrong will cause headaches down the line. It is for that reason that I like to build from the bottom up: start with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure…
    Whenever starting with an ADDS deployment I ask the client the following questions:
    - What are your long term plans and goals?
    - How flexible do you want it?
    - Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?
    Those three questions should give some sort of indication of what direction can be taken, and whether the client has thought about some things themselves :).
    The technical side of things
    What is next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services.
    Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:
    - Network (WAN/LAN links and physical sites)
    - DNS namespacing
    - All in one domain, or split up in different domains/forests?
    - Security (both for ADDS and physical sites)
    The network side of things
    Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages.
    DNS namespacing
    Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that for some reason, a lot of services are going to give you good grief in the future (Exchange being one of them). So one of the best practices is to always use a two-part name (contoso.com or contoso.lan, for example).
    Internal namespace
    An internal namespace is just what it sounds like: you have a DNS domain which is different internally from what the client has as an external namespace, e.g. contoso.com as the external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that the DNS side is more obscured from the internet, but it will require additional work to publish services to the web.
    External namespace
    Quite like the internal namespace, only here the internal namespace of the company does not differ from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. In other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things.
    Split DNS
    While using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. That is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would use different DNS servers for lookups on the external network, servers which have no records of your internal resources unless you explicitly publish them.
    All in one or not?
    In determining your Active Directory design you can look at the following possibilities:
    - Single forest, single domain
    - Single forest, multiple domains
    - Multiple forests, multiple domains
    I've listed the possibilities in increasing order of administrative magnitude. Microsoft recommends using a single forest with a single domain in as many situations as possible.
    It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single forest design to avoid complexity, unless there are strict requirements for multiple forests.
    Security
    What kind of security is required on the domain, and does this reflect the physical security at the sites? Not every client can afford to have a domain controller in a secluded server room at every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality: in essence it will only cache the data you explicitly tell it to cache, so in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected.
    Th- Th- Th- That's all folks! Well, at least for now. In future editions of this series we'll walk through the different tasks that need to be done and the thought that needs to be put into them. All editions will work from the concept of running a single forest, single domain with a split DNS setup… See you next time!

    Read the article

  • TRADACOMS Support in B2B

    - by Dheeraj Kumar
    TRADACOMS is an early EDI standard, predominantly used in the retail sector of the United Kingdom. It is similar to the EDIFACT messaging system, involving an ecs file for translation and validation of messages. The main differences between EDIFACT and TRADACOMS are:
    1. TRADACOMS is a simpler standard than EDIFACT.
    2. There is no functional acknowledgment in TRADACOMS.
    3. Since it is just a business message sent to the trading partner, the various reference numbers at STX, BAT and MHD level need not be persisted in B2B, as no business logic gets derived from them.
    Considering this, in AS11 B2B this can be handled out of the box using the positional flat file document plugin. The STX and BAT segments, which define the envelope details and are part of the transaction, have to be sent from the back-end application itself, as there are no document protocol parameters defined in B2B. These would include identifiers like SenderCode, SenderName, RecipientCode, RecipientName and the reference numbers. Additionally, batching can be achieved by sending all the messages of a batch in a single XML from the back-end application, containing the total number of messages in the batch as part of the EOB (batch trailer) segment. In the inbound scenario, we can identify the document based on the start and end positions of the incoming document. However, there is a plan to identify the incoming document based on the TRADACOMS standard instead of start/end position. Please email [email protected] if you need a working sample.

    Read the article

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by writing that I am a supreme VB6 programmer, but I have very little experience with VB.NET, so I think I still need some more time learning SmartAssembly. SmartAssembly makes obfuscating and merging DLL files a piece of cake! With its simple, straightforward and clean GUI I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well if, like me, you are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on its website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analysis stage on the DLLs? By doing that, I think more people could get fully functional DLLs out of the box, instead of trying different settings and then testing the protected DLL to see if it's working or not. //Peter

    Read the article

  • How to Audit and Monitor BI Publisher Reports Access?

    - by kanichiro.nishida
    Do you know who is accessing which report, at what time, in your reporting environment? Once you have delivered your BI Publisher reports to production and your users start using them as part of their daily business operations, you might start wondering about such questions. With compliance becoming an integral part of any business requirement, auditing your reporting environment is becoming one of the most critical and hottest agenda items in today's enterprise reporting deployments. I also believe that auditing the reporting environment is not just for compliance; it is also a way to understand how your users are using the reports, so that you can improve the user reporting experience.
    BI Publisher introduced an enterprise-level auditing feature with its 11g release, integrating with the Oracle Fusion Middleware Audit Framework, which comes out of the box with the installation. Yes, this is another great example of the benefit of the tight integration with Fusion Middleware introduced in the BI Publisher 11g release.
    What Information Can I Know about Our Reporting Environment?
    With this new auditing feature you can now gain the following insights:
    - When a particular user logs in or out
    - Which report is accessed, by whom, when and how
    - How long it takes to process a particular report
    Yes, it's all there. This is great news for 10g users, right? I used to be one of them, working with many different IT organizations, and we were craving this, but it's here now with 11g!
    How Can I Access the Auditing Information?
    With the Fusion Middleware Audit Framework, BI Publisher feeds such information either to a log file or to a database. If you decide to get the data into the database then, of course, you can use BI Publisher itself to report on, publish, or visualize the data to gain more insights. One thing though: feeding the data to the database requires a few extra steps, which I'll cover later. Regardless of whether it's the log file or the database storing the auditing data, you first need to enable the auditing feature, which is not enabled by default. So, let's take a look at how to enable it.
    How to Enable the Auditing Feature
    Here is a quick list of the steps:
    1. Enable auditing-related properties in the BI Publisher configuration file
    2. Copy the component_events.xml file to the Fusion Middleware Audit Framework's location
    3. Enable the audit policy with Fusion Middleware Control (Enterprise Manager)
    4. Restart the WebLogic Server
    Enable auditing-related properties in the BI Publisher configuration file
    Open the xmlp-server-config.xml file, which is located under the $BI_HOME/user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Configuration directory, and set the following three property values to 'true':
    - AUDIT_ENABLED
    - MONITORING_ENABLED
    - AUDIT_JPS_INTEGRATION
    The 'AUDIT_JPS_INTEGRATION' property is not in the file by default, so you need to add it. Here is an example of how the xmlp-server-config.xml file looks after the modification.
    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <xmlpConfig xmlns="http://xmlns.oracle.com/oxp/xmlp">
       <property name="SAW_SERVER" value="adc6160510"/>
       <property name="SAW_SESSION_TIMEOUT" value="90"/>
       <property name="DEBUG_LEVEL" value="exception"/>
       <property name="SAW_PORT" value="7001"/>
       <property name="SAW_PASSWORD" value=""/>
       <property name="SAW_PROTOCOL" value="http"/>
       <property name="SAW_VERSION" value="v6"/>
       <property name="SAW_USERNAME" value=""/>
       <property name="SAW_URL_SUFFIX" value="analytics/saw.dll"/>
       <property name="MONITORING_ENABLED" value="true"/>
       <property name="MONITORING_DEFAULT_HISTORY_SIZE" value="30"/>
       <property name="AUDIT_ENABLED" value="true"/>
       <property name="JSESSION_RESET_DISABLED" value="true"/>
       <property name="SECURITY_MODEL" value="ORACLE_AS_JPS"/>
       <property name="AUDIT_JPS_INTEGRATION" value="true"/>
    </xmlpConfig>
    Copy component_events.xml to the Audit Framework's location
    There is an audit-related configuration file provided by BI Publisher that needs to be copied to the Audit Framework location:
    1. Go to the $BI_HOME/oracle_common/modules/oracle.iau_11.1.1/components directory.
    2. Create a directory called 'xmlpserver'.
    3. Copy the component_events.xml file from /user_projects/domains/bifoundation_domain/config/bipublisher/repository/Admin/Audit to the newly created 'xmlpserver' directory.
    Enable the audit policy with Fusion Middleware Control (EM)
    Now you can set a level of auditing for each of BI Publisher's auditing types by using Fusion Middleware Control (a.k.a. Enterprise Manager):
    1. Log in to the Fusion Middleware Control UI at http://hostname:port/em (e.g. reporting.oracle.com:7001/em).
    2. Access the audit policy configuration UI from the menu: under WebLogic Domain, right-click bifoundation_domain, select Security and then click Audit Policy.
    3. Set the audit level for BI Publisher. While you can select 'Custom' to set a customized level of auditing for each component, I'm selecting 'Medium' for this exercise.
    Restart WebLogic Server
    After all the above settings, you need to restart the WebLogic Server instance for the changes to take effect. If you're on Windows you can simply do this by selecting 'Stop BI Servers' and 'Start BI Servers' from the Start menu. If you're on Linux you can run 'stopWebLogic.sh' and 'startWebLogic.sh', which can be found under $BI_HOME/user_projects/domains/bifoundation_domain/bin.
    Start Auditing!
    Now, assuming you have completed the above steps successfully, from this point on any reporting activity will be audited and stored in the auditing log file, which can be found at $BI_HOME/user_projects/domains/bifoundation_domain/servers/AdminServer/logs/auditlogs/xmlpserver/audit.log
    And here is a sample of the log file:
    2011-02-18 02:25:49.928 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "200" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 86608512 486989824 24517 169 - - -
    2011-02-18 02:25:49.929 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "200" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
    2011-02-18 03:25:49.554 "" "ReportDataProcess" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "260" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" - - - - - - - - - - - - - - - - - 34980200 554033152 - 134 - - -
    2011-02-18 03:25:50.282 "" "ReportRendering" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportExecution" "263" "" "/Sample Lite/Published Reporting/Reports/Balance Letter.xdo" "pdf" "RTF Corp Styles" "en_US" - - - - - - - - - - - - - - 16158944 554033152 24517 503 - - -
    2011-02-18 03:25:50.282 "steve.jobs" "ReportRequest" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000022,0" - - - - "bipublisher(11.1.1)" "ReportAccess" "263" "" "" "pdf" "RTF Corp Styles" - - - true - - - - - - - - - - - - - - - - - -
    2011-02-18 03:30:00.448 "barack.obama" "UserLogin" true - "82d4bdc47b99b33c:-7e3f334f:12e365c4d9c:-8000-0000000000000406,0" - - - - "bipublisher(11.1.1)" "UserSession" "26" "" - - - - - - - - - - - - - - - - - - - - - - - - -
    From the above log file you can tell that a user 'steve.jobs' was running reports like 'Balance Letter' in the afternoon of 2/18, and that another user 'barack.obama' logged into the system at 3:30 on the same day. Yes, every login and logout will be recorded, and every report access will be recorded in this log file. Now, looking at this text file to understand what's going on is pretty overwhelming. And accessing this log file, which sits on the file system of the server where BI Publisher/WebLogic Server are running, is another challenge in typical deployment scenarios. That's where the database storage option for the auditing data comes into the picture. I'll talk about this tomorrow, so stay tuned!

    Read the article

  • Problem to install Apache 2.4.2 in Ubuntu 12.04

    - by Michael
    I followed the steps on this site (http://www.discusswire.com/apache-2-4-installation-ubuntu/) to install Apache 2.4.2 on Ubuntu 12.04, but it seems Apache is not installed. Here is what I did:
    sudo apt-get install build-essential
    sudo apt-get build-dep apache2
    wget http://apache.mirrors.pair.com/httpd/httpd-2.4.2.tar.gz
    tar -xzvf httpd-2.4.2.tar.gz && cd httpd-2.4.2
    sudo ./configure --prefix=/usr/local/apache2 --enable-mods-shared=all --enable-deflate --enable-proxy --enable-proxy-balancer --enable-proxy-http --with-mpm=prefork
    sudo make
    sudo make install
    When I tried to start it by issuing sudo /usr/local/apache2/bin/apachectl start in a terminal, I got the following warning:
    "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message"
    When I type top in a terminal, apache is not there. I also tried to go to http://localhost/, 127.0.0.1, or even 127.0.1.1, and it showed a "Can't establish connection to server ..." message.
    PS: I checked the error log and it showed:
    [Fri Jul 27 15:49:00.703901 2012] [proxy_balancer:emerg] [pid 20781] AH01177: Failed to lookup provider 'shm' for 'slotmem': is mod_slotmem_shm loaded??
    [Fri Jul 27 15:49:00.704083 2012] [:emerg] [pid 20781] AH00020: Configuration Failed, exiting
    What am I missing? Thanks, Michael
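    Could it be that mod_slotmem_shm simply isn't being loaded before mod_proxy_balancer? I'm guessing I'd need something like the following in my httpd.conf (module paths guessed from the --prefix above, so treat them as placeholders):
    # mod_proxy_balancer needs a shared-memory slot provider; load it with the proxy modules
    LoadModule slotmem_shm_module modules/mod_slotmem_shm.so
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
    # Optional: should silence the AH00558 FQDN warning
    ServerName localhost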

    Read the article

  • NTP service, offset increasing after sync

    - by Ajay
    I have installed Ubuntu 12.10 on my PC. I am running the NTP service with a GPS reference as the NTP server. I found that when we start the NTP service (ntp start command), the PC is able to sync with the GPS, as I get a '*' symbol before the GPS IP when I run ntpq -p. This remains good for some time and then the '*' symbol is removed, which means that the PC is no longer synchronized to that server. Now, running ntpq -p shows that all parameters are OK, but as the '*' is removed, the offset slowly goes on increasing:
    remote           refid   st t when poll reach  delay  offset  jitter
    ==============================================================================
    *192.168.100.33  .GPS.    1 u    7   16     1  2.333  23.799   0.808
    ==============================================================================
    *192.168.100.33  .GPS.    1 u   14   16     3  2.333  23.799   0.879
    ==============================================================================
    *192.168.100.33  .GPS.    1 u   11   16     7  2.333  23.799   1.500
    ==============================================================================
    *192.168.100.33  .GPS.    1 u    8   16    17  2.333  23.799   2.177
    Below are the last ntpq outputs after sync is lost with the GPS:
    ==============================================================================
    192.168.100.33   .GPS.    1 u    1   16   377  2.404  1169.94  1.735
    ==============================================================================
    192.168.100.33   .GPS.    1 u    -   16   377  2.513  1171.80  0.898
    ==============================================================================
    192.168.100.33   .GPS.    1 u   15   16   377  2.513  1171.80  0.898
    Even though the GPS is still available, the PC never re-synchronizes itself to the GPS later on. I have to restart the NTP service, and then the PC synchronizes to the GPS and the '*' symbol returns.
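    As a stopgap I am considering scripting the restart. A rough watchdog sketch (assuming ntpq and the ntp init script are on the PATH and the script runs as root):
    import subprocess
    import time

    def synced():
        """True if `ntpq -pn` lists a peer marked '*' (the selected sync peer)."""
        out = subprocess.check_output(["ntpq", "-pn"]).decode()
        return any(line.startswith("*") for line in out.splitlines())

    while True:
        if not synced():
            # ntpd has dropped the peer; restart so it re-selects the GPS server
            subprocess.call(["service", "ntp", "restart"])
        time.sleep(60)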

    Read the article

  • Installation taking a very long time, hangs at "Configuring bcmwl-kernel-source"

    - by user290522
    I am installing Ubuntu 14.04 (32-bit) on my laptop (Compaq Presario V2000), and after about 7 hours it is still in "Configuring bcmwl-kernel-source (i386)" mode. The messages I read are as follows:
    ubuntu kernel: [22814.858163] ACPI: \_SB_.PCI0.LPC0.LPC0.ACAD: ACPI_NOTIFY_BUS_CHECK event: unsupported
    with the numbers in the square brackets increasing. I have had Windows XP Professional on this laptop, and I am erasing it. I am not sure if I should turn off the laptop and start all over again. About 4 years ago I installed Ubuntu on this laptop, and that was very fast. The only problem I encountered was my wireless, which I could not make work, so I switched back to Windows. I appreciate any comments regarding this installation taking such a long time. After 40 hours the installation was still in configuring mode with the following messages:
    ubuntu CRON[29329]: (root) CMD ( cd/ && run-part .. report /etc/cron-hourly)
    I did the following to check for errors: pressed Ctrl-Alt-F2. This time the system froze. I had no other choice but to turn off the laptop and start all over again. The exact model of the laptop is "Compaq Presario V2069CL Notebook PC" with an AMD processor.

    Read the article

  • Help, broken Gsettings

    - by Rene
    I was trying to disable the global menu as per http://ubuntuhandbook.org/index.php/2013/07/disable-global-menu-on-ubuntu-13-10-saucy/#comment-8612, but while it didn't change anything, after running the autoremove command unity-tweak-tool broke. Obviously my first reaction was to re-install the removed package, but it remains broken. TBH I don't know if it is even related or just a coincidence. When I start it from the launcher it just blinks and disappears. When I start it from a terminal I get this error:
    $ gnome-tweak-tool
    WARNING : Shell not installed or running
    WARNING : Error detecting shell
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 199, in __init__
        raise Exception("Shell not running or DBus service not available")
    Exception: Shell not running or DBus service not available
    INFO : GSettings missing key org.gnome.nautilus.desktop (key computer-icon-visible)
    WARNING : Shell not running None
    INFO : GSettings missing key org.gnome.mutter (key workspaces-only-on-primary)
    Segmentation fault (core dumped)
    I had a look with dconf-editor to see if I could just add the missing key, but apparently keys aren't meant to be added "by hand". So how can I fix this? I'd rather not have to reinstall everything. Which package is broken, and can I just reinstall that?
    EDIT: I found that as root, gnome-tweak-tool no longer crashed, so possibly a permission issue somewhere. I don't know that I changed any permissions. Another related problem, actually the reason I noticed the problem at all, is that unity-tweak-tool no longer seems to want to save edited values. I normally have the Unity launcher only on the primary display, but I wanted to check what it was like having it on both. I didn't like it, so I went into unity-tweak-tool to set it back, but regardless of how many times I tick "only primary display" it never changes anything. What does unity-tweak-tool actually change, and can I do this directly somehow?

    Read the article

  • Project Euler 14: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here's my solution for Project Euler Problem 14. As always, any feedback is welcome.
    # Euler 14
    # http://projecteuler.net/index.php?section=problems&id=14
    # The following iterative sequence is defined for the set
    # of positive integers:
    #   n -> n/2 (n is even)
    #   n -> 3n + 1 (n is odd)
    # Using the rule above and starting with 13, we generate
    # the following sequence:
    #   13 40 20 10 5 16 8 4 2 1
    # It can be seen that this sequence (starting at 13 and
    # finishing at 1) contains 10 terms. Although it has not
    # been proved yet (Collatz Problem), it is thought that all
    # starting numbers finish at 1. Which starting number,
    # under one million, produces the longest chain?
    # NOTE: Once the chain starts the terms are allowed to go
    # above one million.
    import time
    start = time.time()

    def collatz_length(n):
        # 0 and 1 return self as length
        if n <= 1:
            return n
        length = 1
        while n != 1:
            if n % 2 == 0:
                n /= 2
            else:
                n = 3*n + 1
            length += 1
        return length

    starting_number, longest_chain = 1, 0
    for x in xrange(1, 1000001):
        l = collatz_length(x)
        if l > longest_chain:
            starting_number, longest_chain = x, l

    print starting_number
    print longest_chain

    # Slow 31 seconds
    print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
    a = raw_input('Press return to continue')
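    One possible improvement on that 31-second runtime: Collatz chains overlap heavily, so caching lengths already computed should cut the work dramatically. A quick sketch of a memoized variant (an addition here, not part of the original solution), in the same Python 2 style:
    # Cache of known chain lengths; chains share long tails, so hits are frequent
    cache = {1: 1}

    def collatz_length_memo(n):
        stack = []
        # Walk forward until we reach a term whose length is already known
        while n not in cache:
            stack.append(n)
            n = n / 2 if n % 2 == 0 else 3 * n + 1
        length = cache[n]
        # Unwind: each term we passed is one step longer than its successor
        while stack:
            length += 1
            cache[stack.pop()] = length
        return length

    print max(xrange(1, 1000001), key=collatz_length_memo)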

    Read the article

  • New training on Power Pivot with recorded video courses

    - by Marco Russo (SQLBI)
    Alberto Ferrari and I started delivering training on Power Pivot in 2010, initially in classrooms and then also online. We also recorded videos for Project Botticelli, where you can find content about Microsoft tools and services for Business Intelligence. In recent months, we produced a recorded video course for people who want to learn Power Pivot without attending a scheduled course. We split the entire Power Pivot training course into three editions, offering the more introductory modules at a lower price:
    - Beginner: introduces Power Pivot to any user who knows Excel and wants to create reports with more complex and larger data structures than a single table.
    - Intermediate: improves skills on Power Pivot for Excel, introducing the DAX language and important features such as CALCULATE and Time Intelligence functions.
    - Advanced: includes in-depth coverage of the DAX language, which is required for writing complex calculations, and other advanced features of both Excel and Power Pivot.
    There are also two bundles, which include two or three editions at a lower price. Most importantly, we have a special 40% launch discount on all published video courses using the coupon SQLBI-FRNDS-14, valid until August 31, 2014. Just follow the link to see a more complete description of the available editions and their discounted prices. Regular prices start at $29, which means that you can start training with less than $18 using the special promotion.
    P.S.: we recently launched a new responsive version of the SQLBI web site, and now we also have a page dedicated to all the available videos of our sessions at conferences around the world. You can find more than 30 hours of free videos here: http://www.sqlbi.com/tv.

    Read the article

  • Go Big or Go Home

    - by user12601034
    For those who don’t know, Oracle sponsors a group called “OWL” – Oracle Women’s Leadership - and the purpose of the group is to create local and global opportunities that support, educate and empower current and future women leaders at Oracle. This week, I had the opportunity to attend the Denver OWL roadshow, and I was really impressed with the quality of speakers and interactions that I experienced. One theme that arose throughout the day was that of “Lean In.” In a nutshell, “Lean In” requires you to take advantage of the opportunities that you’re given. One of my personal mantras is “Go big or go home."  That is, if you’re not willing to give it your all, don’t do it at all. Regardless of how you phrase it, it’s a life lesson that I believe needs to be tossed in our face every so often simply, if for no other reason, to get our attention. You are given a finite amount of time in your life; in your job role; in your interactions with others. Do you make the most of the opportunities given to you every day? Or do you believe that life just happens, and you have to deal with whatever is handed to you? I have a challenge for you. Set aside any concerns or fears you have about something and Lean In. Make the most of an opportunity presented to you…or make your own opportunity! If you start with just one thing, you’ll start building a mindset to make the most of additional opportunities. Not only will you be better for leaning in, but I’m betting that those around you will be better for it as well.

    Read the article

  • Strange Misleading Error [XML-20108/AC-10006] when doing the R12 Cloning

    - by [email protected]
    During a recent multi-node to single-node R12 clone, I encountered a strange error while doing the database portion of the clone. The command below ('adclonectx.pl') creates the context file:
    perl adclonectx.pl contextfile=$ORACLE_HOME/appsutil/SOURCE_CONTEXT_FILE.xml template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp pairsfile=$ORACLE_HOME/appsutil/clone/pairsfile.txt initialnode
    When running this command, it dumped the error below:
    file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected.
    AC-10006: Exception - org.xml.sax.SAXParseException: file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected. thrown while creating OAVars object for file: /tmp/tmpCtxClone.xml
    The new database context file has been created : /opt/oracle/product/11.1.0_IOFT/appsutil/IOFT_frws35ta.xml
    At first sight, I suspected that the issue was with the format of the source XML file, so I compared it with a working XML file. The result was clean. Then the below portion of the error struck me:
    Thrown while creating OAVars object for file: /tmp//dummy.xml
    Cause: /tmp was 100% full.
    Fix: either remove the old files in the /tmp directory, OR export TEMP=/new/location where there is plenty of free space.
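    A quick way to confirm this condition before re-running the clone (standard commands; the alternate TEMP path is only an example):
    # Check how full /tmp is
    df -h /tmp
    # Either clear old temporary files, or point TEMP at a volume with free space
    export TEMP=/u01/tmp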

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (2 x 1TB drives) in size. When I use it, it reports only one of the drives. I would like to be able to use both as if they were one drive. Here are the specs:
    sudo lshw -C disk
    *-disk
       description: ATA Disk
       product: TOSHIBA DT01ACA1
       vendor: Toshiba
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/sda
       version: MS2O
       serial: 13EJ81XPS
       size: 931GiB (1TB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 signature=0005b3dd
    *-disk
       description: ATA Disk
       product: TOSHIBA DT01ACA1
       vendor: Toshiba
       physical id: 0.0.0
       bus info: scsi@4:0.0.0
       logical name: /dev/sdb
       version: MS2O
       serial: 13OX3TKPS
       size: 931GiB (1TB)
       capabilities: partitioned partitioned:dos
       configuration: ansiversion=5 signature=00030e86
    and fdisk -l:
    Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00030e86
    Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *     4096    41947135    20971520   fd  Linux raid autodetect
    /dev/sdb2     41947136  1952468991   955260928   fd  Linux raid autodetect
    /dev/sdb3   1952468992  1953519615      525312   82  Linux swap / Solaris
    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x0005b3dd
    Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *     4096    41947135    20971520   fd  Linux raid autodetect
    /dev/sda2     41947136  1952468991   955260928   fd  Linux raid autodetect
    /dev/sda3   1952468992  1953519615      525312   82  Linux swap / Solaris
    Disk /dev/md2: 978.2 GB, 978187124736 bytes
    2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    Disk /dev/md2 doesn't contain a valid partition table
    Disk /dev/md1: 21.5 GB, 21474770944 bytes
    2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000
    Disk /dev/md1 doesn't contain a valid partition table
    Is it possible to mount both drives to, say, /Home/ so I would have 2TB of usable space?
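    From the output I'm guessing the disks are already assembled into Linux software RAID (note the "Linux raid autodetect" partitions and the /dev/md1 and /dev/md2 devices), and the roughly 978 GB size of md2 suggests a mirror of the two large partitions rather than a 2TB stripe. I suppose the standard way to check before changing anything would be:
    # Show active arrays and their member disks
    cat /proc/mdstat
    # Show RAID level, size and state of the large array
    sudo mdadm --detail /dev/md2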

    Read the article

  • Problem with WCF-SQL Adapter

    - by Paul Petrov
    When using the WCF receive adapter with the SQL binding in polling mode, please be aware of the following problem.
    Problem: at some regular but seemingly random intervals, the application stops processing new requests, places a lock on the database, and prevents other applications from accessing it. Initially it looked like a DTC issue, as it was a distributed transaction that stalled most of the time.
    Symptoms: orchestration instances in Dehydrated state, the receive location not picking up new messages, exclusive locks on database tables, errors in the DTC trace.
    Cause: Microsoft has confirmed that there is a bug in the WCF-SQL adapter. In the receive adapter binding configuration there is a receiveTimeout property, set to 10 minutes by default. If during this period no data is found in the table, the adapter starts a new thread and allocates more memory without releasing the old resources. Thus, if there is no new data in the table for a long time, a new thread is created in the host instance every 10 minutes until the threshold (1000) is reached, at which point there are no threads left for this host instance and it can't start or complete any tasks. This host instance then won't be able to do anything, and if other artifacts are hosted in it they will suffer the consequences as well.
    Solution:
    - Set receiveTimeout to the maximum value, 24.20:31:23.6470000.
    - Place WCF-SQL receive locations in a separate host to provide their own thread pool and eliminate impact on other processes.
    - Ensure the dedicated WCF-SQL host instances are restarted at an interval less than or equal to receiveTimeout, to flush threads and memory.
    - Monitor the performance counter Process/Thread Count/BTSNTSvc{n} for the thread count trend, and respond to alerts of growth by restarting the host instance.
    If you use the WCF-SQL adapter in notification mode, then make sure to remove sqlAdapterInboundTransactionBehavior, otherwise the location will exhibit the same issue. In that case, though, setting receiveTimeout doesn't help, and new threads will be created at the default interval (10 minutes), ignoring the maximum setting.
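    For illustration, a hypothetical fragment of the receive location's binding configuration with the maximum timeout applied; only the receiveTimeout value comes from the advice above, the binding name and surrounding structure are placeholders:
    <bindings>
      <sqlBinding>
        <!-- maximum receiveTimeout so idle polling cycles stop leaking threads -->
        <binding name="SqlAdapterBinding"
                 receiveTimeout="24.20:31:23.6470000" />
      </sqlBinding>
    </bindings>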

    Read the article

  • Unable to configure a service to run at startup with update-rc.d

    - by ujjain
    I would like to have transmission-daemon and vnstat automatically run at startup. I was able to configure this for apache2 and proftpd with exactly the same commands. 795 sudo update-rc.d transmission-daemon remove 797 sudo update-rc.d -f transmission-daemon remove 798 sudo update-rc.d transmission-daemon defaults 799 sudo update-rc.d vnstat remove 800 sudo update-rc.d -f vnstat remove 801 sudo update-rc.d -f vnstat defaults 802 sudo update-rc.d -f vnstat enable 805 reboot 807 history root@htpc:/home/administrator# sudo update-rc.d -f transmission-daemon remove Removing any system startup links for /etc/init.d/transmission-daemon ... /etc/rc0.d/K20transmission-daemon /etc/rc1.d/K20transmission-daemon /etc/rc2.d/S20transmission-daemon /etc/rc3.d/S20transmission-daemon /etc/rc4.d/S20transmission-daemon /etc/rc5.d/S20transmission-daemon /etc/rc6.d/K20transmission-daemon root@htpc:/home/administrator# sudo update-rc.d -f transmission-daemon defaults Adding system startup for /etc/init.d/transmission-daemon ... /etc/rc0.d/K20transmission-daemon -> ../init.d/transmission-daemon /etc/rc1.d/K20transmission-daemon -> ../init.d/transmission-daemon /etc/rc6.d/K20transmission-daemon -> ../init.d/transmission-daemon /etc/rc2.d/S20transmission-daemon -> ../init.d/transmission-daemon /etc/rc3.d/S20transmission-daemon -> ../init.d/transmission-daemon /etc/rc4.d/S20transmission-daemon -> ../init.d/transmission-daemon /etc/rc5.d/S20transmission-daemon -> ../init.d/transmission-daemon root@htpc:/home/administrator# service vnstat status * vnStat daemon is not running root@htpc:/home/administrator# service transmission-daemon status * transmission-daemon is not running root@htpc:/home/administrator# service transmission-daemon start * Starting bittorrent daemon transmission-daemon [ OK ] root@htpc:/home/administrator# service vnstat start * Starting vnStat daemon vnstatd [ OK ] root@htpc:/home/administrator# service apache2 status Apache2 is running (pid 1137). root@htpc:/home/administrator#

    Read the article

  • Fix invalid objects and components - BEFORE you upgrade!

    - by Mike Dietrich
    We are currently running a Tech Challenge workshop with 25 Oracle consultants and support folks from all over EMEA. We call it Tech Challenge because we separate these experts, who have between 5 and 20 years of Oracle experience, into 5 groups, and each group has to complete its own special challenge, such as moving a database from 10.2 to Exadata V2, or upgrading from a 10.2 single instance to Real Application Clusters 11.2 with the new Grid Infrastructure. We actually start this training with a few presentation pieces about upgrades, Real Application Testing and GoldenGate. And one topic I always point out: keep your database tidy before the upgrade!!! Clean up all invalid objects, especially in the SYS and SYSTEM user schemas, BEFORE you upgrade. Use utlrp.sql to recompile invalid objects. Use Note:753041.1 to diagnose and fix invalid components. Do this always BEFORE you start the upgrade, even if it may take some time. Otherwise your upgrade could fail, or significant parts of the database packages could be invalid after the upgrade as well. I just came across this today, as one group had ~240 invalid objects in the database, and due to the fact that the original system was still there we could prove that the objects had been invalid before. Good job, BUT ... :-)
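    A quick check you can run upfront (a standard dictionary query, followed by the recompile script mentioned above):
    -- Count invalid objects per schema and type
    SELECT owner, object_type, COUNT(*) AS invalid_count
      FROM dba_objects
     WHERE status = 'INVALID'
     GROUP BY owner, object_type
     ORDER BY owner, object_type;

    -- Then recompile and re-run the check:
    -- SQL> @?/rdbms/admin/utlrp.sql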

    Read the article

  • Unable to Install VirtualBox Due to Missing Kernel Module

    - by SoftTimur
    I am trying to install VirtualBox on my Ubuntu. I first tried to sudo apt-get install virtualbox-ose in a terminal, but after the configuration step, it fails with an error: No suitable module for running kernel found When proceeding with starting virtualbox, I get this error: WARNING: The character device /dev/vboxdrv does not exist. Please install the virtualbox-ose-dkms package and the appropriate headers, most likely linux-headers-generic. You will not be able to start VMs until this problem is fixed. So I tried the package from http://www.virtualbox.org/, but starting VirtualBox fails with: WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (2.6.38-8-generic-pae) or it failed to load. Please recompile the kernel module and install it by sudo /etc/init.d/vboxdrv setup You will not be able to start VMs until this problem is fixed. So I ran sudo /etc/init.d/vboxdrv setup, but it fails too: * Stopping VirtualBox kernel modules [ OK ] * Uninstalling old VirtualBox DKMS kernel modules [ OK ] * Trying to register the VirtualBox kernel modules using DKMS Error! Your kernel headers for kernel 2.6.38-8-generic-pae cannot be found at /lib/modules/2.6.38-8-generic-pae/build or /lib/modules/2.6.38-8-generic-pae/source. * Failed, trying without DKMS * Recompiling VirtualBox kernel modules * Look at /var/log/vbox-install.log to find out what went wrong The contents of /var/log/vbox-install.log. As I am stuck, I also tried to install kernel-devel with yum, still fruitless: root@ubuntu# yum install kernel-devel Setting up Install Process No package kernel-devel available. Nothing to do Now I've no idea how to correct this. Any ideas?
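    Since both errors point at missing headers for the running kernel (and yum is Fedora/RHEL tooling rather than Ubuntu's), could the fix be as simple as installing the matching headers and rebuilding? Something like this, assuming the stock Ubuntu kernel packages:
    # Install headers matching the running kernel, then rebuild the VirtualBox module
    sudo apt-get install linux-headers-$(uname -r) dkms
    sudo /etc/init.d/vboxdrv setup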

    Read the article

  • Asus Eee PC 1000HE wireless woes

    - by Vladimir Noobokov
    Ever since I upgraded my Asus Eee PC 1000HE from Lucid 10.04 to Precise 12.04 I have been having issues with my wireless connections. At first I had wireless dropouts: I would be able to start using wireless, but after a few minutes the wireless would stop working even though I was still connected to the network. Lately things have turned worse: I connect to my wireless network, but it just never works. I have tried all sorts of solutions on offer here and in other forums, but none worked. At best I got the wireless to work until I rebooted, at which point I would get the same symptoms again: the wireless network is there, but it's not really working. By now I have tried so many different "solutions" I don't know where to start describing them; I have also reinstalled 12.04 several times, enough to make me lose faith in Ubuntu. Help here looks like my last resort. For the record, my Asus Eee PC 1000HE is equipped with an Atheros wireless card. I have reinstalled 12.04, run all the suggested updates, and receive the following response when I type iwconfig in the terminal:
    lo        no wireless extensions.
    wlan0     IEEE 802.11bgn  ESSID:"Arsenal"
              Mode:Managed  Frequency:2.452 GHz  Access Point: 00:04:ED:48:67:89
              Bit Rate=1 Mb/s   Tx-Power=16 dBm
              Retry long limit:7   RTS thr:off   Fragment thr:off
              Power Management:off
              Link Quality=70/70  Signal level=-29 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:27  Invalid misc:57   Missed beacon:0
    eth0      no wireless extensions.
    Thanks in advance for any help that might be offered.

    Read the article

  • Multithreading in Windows Phone 7 emulator: A bug

    - by Laurent Bugnion
    Multithreading is supported in Windows Phone 7 Silverlight applications; however, the emulator has a bug (which I discovered, and which was confirmed to me by the dev lead of the emulator team): if you attempt to start a background thread in the MainPage constructor, the thread never starts. The reason is a problem with the emulator UI thread, which doesn't leave any time for the background thread to start. Thankfully there is a workaround (see code below). Also, the bug should be corrected in a future release, so it's not a big deal, even though it is really confusing when you try to understand why the *%&^$£% thread is not &$%&%$£ starting (that was me in the plane the other day ;)
    This code does not work:
    public partial class MainPage : PhoneApplicationPage
    {
        public MainPage()
        {
            InitializeComponent();
            SupportedOrientations = SupportedPageOrientation.Portrait
                | SupportedPageOrientation.Landscape;

            var counter = 0;
            ThreadPool.QueueUserWorkItem(o =>
            {
                while (true)
                {
                    Dispatcher.BeginInvoke(() =>
                    {
                        textBlockListTitle.Text = (counter++).ToString();
                    });
                }
            });
        }
    }
    This code does work:
    public MainPage()
    {
        InitializeComponent();
        SupportedOrientations = SupportedPageOrientation.Portrait
            | SupportedPageOrientation.Landscape;

        var counter = 0;
        ThreadPool.QueueUserWorkItem(o =>
        {
            while (true)
            {
                Dispatcher.BeginInvoke(() =>
                {
                    textBlockListTitle.Text = (counter++).ToString();
                });

                // NOTICE THIS LINE!!!
                Thread.Sleep(0);
            }
        });
    }
    Note that even if the thread is started in a later event (for example the Click of a Button), the behavior without the Thread.Sleep(0) is not good in the emulator. As of now, I would recommend always sleeping when starting a new thread. Happy coding!
    Laurent Bugnion (GalaSoft)

    Read the article

  • Preventing battery from charging

    - by intuited
    I'm running on UPS power and would like to prevent the laptop's battery from charging, to increase the amount of power available to other devices. Is there a way to do this? update The machine is a Dell Latitude D400. If people want more details, just ask. Also, I'm gathering that I need to explain my desired setup a little better. I've gotten a bunch of suggestions about taking the battery out. I'm not sure if people are suggesting to take the battery out while the machine is running — this, as I understand, is not a good idea with most laptops — or to just remove the battery altogether. The latter option is not optimal, because ideally I'd like to use the 30-60 minutes of power in the laptop battery and then switch over to UPS power. The details of the switch-over may constitute a separate question, but if I can't find a way to keep the laptop battery from charging, then removing the battery from the machine altogether may be the best way to do this. I'm not sure yet if this machine will run without a battery, but I'll check that out. Other than the laptop, the UPS is just supporting a cable modem and router and a USB hub. Again in the idealized version of this setup, all the power management changes would be automated, i.e. not require replugging anything or pressing Fn-keys. I'd like the machine to start using laptop battery power when apcupsd indicates that the UPS A/C is out, and then start using UPS power, but not charging the battery, when the battery is almost depleted.

    Read the article

  • How to add locale aware CSS to an individual component in ADF Faces.

    - by [email protected]
    When creating a skin in ADF Faces, it's (relatively) easy to add locale-aware CSS using the :rtl pseudo-class at the end of a skinning key. Example:
    af|inputListOfValues::content {
      padding-left: 3px;
    }
    af|inputListOfValues::content:rtl {
      padding-left: 0px;
      padding-right: 3px;
    }
    In this example, we want some padding before the start of the text in the content element of the component. In right-to-left locales, the start of the text is on the right side by default, so we need to change the padding from the left to the right.
    Let's say, however, that you want to specify a locale-aware CSS style on an individual component using the contentStyle attribute. There is a handy ADF Faces EL function to help you out here: isRTL. For our example, let's say we want an inputText component whose text content is 'end' aligned. If you weren't considering RTL locales, you could code this as:
    <af:inputText id="idInputTextRight" label="right aligned" value="Test"
                  contentStyle="text-align: right;"/>
    This, however, will be right aligned regardless of locale. This is where isRTL() comes to the rescue. This is how we would code it to be locale aware:
    <af:inputText id="idInputTextEnd" label="end aligned" value="Test"
                  contentStyle="text-align: #{af:isRTL()?'left':'right'};"/>
    The af:isRTL() EL function returns true if we are rendering in RTL, so we can use it to pick the appropriate text alignment.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them.
    When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:
    SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips
    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.
    DECLARE @Start DATETIME
    DECLARE @First DATETIME
    DECLARE @Second DATETIME
    DECLARE @Third DATETIME
    DECLARE @Finish DATETIME

    SET @Start = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
    SET @First = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
    SET @Second = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
    SET @Third = GETDATE()
    SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
    SET @Finish = GETDATE()

    SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
         , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
         , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
         , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]
    Running this code will give you 4 result sets:
    - DEFAULT will have 12 columns full of data and then NULLs in the remainder.
    - SAMPLED will have 21 columns full of data.
    - LIMITED will have 12 columns of data and NULLs in the remainder.
    - DETAILED will have 21 columns full of data.
    So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.
    Let's look at the data we get back with the DEFAULT value first of all, and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:
    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         …
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    which gives us
    SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
         , OBJECT_NAME([ddips].[object_id]) AS [TableName]
         , [i].[name] AS [IndexName]
         , …
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
    INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                   AND [ddips].[object_id] = [i].[object_id]
    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these, you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.
    SELECT *
    FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips
    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:
    DECLARE @db_id SMALLINT;
    DECLARE @object_id INT;

    SET @db_id = DB_ID(N'AdventureWorks_2008');
    SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

    IF @db_id IS NULL
    BEGIN
        PRINT N'Invalid database';
    END
    ELSE IF @object_id IS NULL
    BEGIN
        PRINT N'Invalid object';
    END
    ELSE
    BEGIN
        SELECT *
        FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
    END;
    GO
    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used.
    So, now we can relate the values in these columns to something that we recognise in the database; let's see what those other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in DEFAULT mode are:
    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count: total number of index or data pages in use.
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access in order to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks must do to retrieve the data to satisfy the query increases, and performance would start to suffer. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA who is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business, and that regular defragmentation work will be needed to keep them in good order. In other cases, indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method to apply, if any. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes.
    Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes.
    We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 (avg_page_space_used_in_percent) is how full each page is (how much of the 8KB has actual data written on it), column 3 (record_count) is how many records are recorded in the index, and column 4 (avg_record_size_in_bytes) is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*.
    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index.
Right, let's get back on track then. Querying the dmv with the fifth parameter set to 'DETAILED' takes longer because it goes through the whole index and refreshes all the data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes.

There is a correlation between several of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are recorded in the index and avg_record_size_in_bytes is the average size of each record. This approximates to: ((page_count * 8 * 1024) * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*.

avg_page_space_used_in_percent is an important column to review as it indicates how much of the disk space given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter used when creating the index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records fit in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index, purely offered as a guide to help the DBA better understand the storage practices taking place.

So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider: adjusting the fill_factor of the index in order to leave space at the leaf level, so that new records can be inserted without causing fragmentation so rapidly; analysing the columns used in the index, to avoid new records needing to be inserted in the middle of the index and instead have them always added at the end.

* – it's approximate because many factors, such as the type of data and other database settings, affect this slightly.

Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.

Disclaimer – Jonathan is a Friend of Red Gate and, as such, will have a generally positive disposition towards Red Gate tools whenever they are discussed. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server, and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • The lifecycle of "cool"

    - by Dori
    I've been thinking lately about how some programming projects/products become "cool," and in particular, how that trend can later reverse. Here are two examples that might better explain my context:
    Textmate
    Whenever someone asks about text editors on OS X, the answer on the SE sites is an automatic "Textmate!" But looked at objectively:
    - Textmate 1.0 shipped October 2004
    - Textmate 1.5 shipped January 2006
    - Textmate 2 was announced February 2006
    - As of September 2010, the currently shipping version is 1.5.9
    - In all of 2010, there have been a total of three posts on the Textmate blog
    At what point (if ever) do Textmate fans start thinking about switching to another text editor? When it breaks after some future Apple update? When alpha geeks they respect start recommending something else? Or?
    jQuery
    Whenever a JavaScript-related question is asked on the SE sites, the knee-jerk response is "jQuery!" I've seen it happen even when the question itself only required a single line of JavaScript, or when the question could be better answered by using CSS. Do the answerers understand they're suggesting a blowtorch to light a candle? That they're recommending adding 70K or so of code to do something trivial? Or is it a symptom of "when you have a hammer, everything looks like a nail": that is, jQuery is all they know how to do, so that's their recommendation? And do they understand that while they may know jQuery well, that doesn't necessarily mean that they know JavaScript? Is there a way to explain that learning JavaScript would make them better jQuery programmers?
    My bigger-picture questions:
    - Is this niche focus primarily a trait of programmers?
    - How do you get programmers to not immediately jump to recommending their personal favorites?
    - What can motivate programmers to review their initial selection criteria and possibly modify their choice?
    Your thoughts?

    Read the article

  • Partner Webcast: Implementing on SOA - A Hands-On Technology Demonstration

    - by Thanos
    Service Oriented Architecture enables organizations to operate more efficiently and react faster to opportunities. How? By helping you create a flexible application architecture that supports greater business agility. You decide how quickly you want to move. You can start by implementing an application integration platform. Then, you can evolve your environment gradually by introducing business process management, business rules, governance and event processing. This unified but flexible approach also allows you to maximize the long-term cost reduction benefits of SOA and cloud-based applications. In this session you will dive into SOA Suite and see some of its advanced features in use. The topics covered range from adapters, automatic and custom business process correlation through service routing, rule-based and manual decisions, to error handling, compensations and extending SOA Suite with your own Java code.
    Agenda:
    - Service Oriented Architecture
    - The Auctions Scenario
    - Live Demo of the Oracle SOA Suite Features:
      - Connecting to non-service-enabled technologies with adapters (Database and File adapters)
      - Orchestrating services with BPEL processes
      - Correlating processes with correlation sets
      - Mediating services
      - Service Component Architecture
      - Event Handling
      - User Notification
      - Human Workflow
      - Business Rules
      - Fault Handling patterns
      - Developing custom components with Spring and using them in SOA Suite composites
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now. For all your questions and support requests to adopt and implement the latest Oracle technologies please contact us at [email protected]

    Read the article

  • Since a few days, Dropbox stopped syncing

    - by Fabio
    There's a new problem with nautilus-dropbox, and it's the worst kind for a service like Dropbox: it has stopped syncing, and their tech support don't believe their users. The problem goes like this:
    - start Dropbox
    - it downloads the index
    - creates the folders
    - starts to download files
    - then stops downloading files, or the speed drops to something like 0.3 KB/s
    Deleting everything and starting over doesn't work; neither does syncing a single folder with just a few files, nor configuring a different speed limit or letting it use all of your bandwidth. Nothing works for me, and I have found many users with the same problem. It's strange: I have two computers with Ubuntu 14.04 and one with Windows 7, and on one of the Linux machines it works fine, but not on the other. Both of them are x64. The answers from tech support range from "delete everything and reinstall" to "you may be blocking port 443" (sure, and if I did that half of the internet would disappear), but nothing works. Does anyone have any ideas? I can't find any log file (at least a text log file like those in /var/log) to help understand what Dropbox is doing and what is not working.
    PS: sorry for my English, it is not my native language.
    Examples: dropbox slow on ubuntu 14.04 | Dropbox does not sync on ubuntu 14.04 | Dropbox Status (in Spanish, but it says the same thing):
    fabio@canopus:~$ dropbox status
    Sincronizando (Quedan 330 archivos., Faltan 9 días.)
    Descargando 330 archivos... (0,3 KB/s, Faltan 9 días.)
    (English: Syncing (330 files remaining, 9 days left.) Downloading 330 files... (0.3 KB/s, 9 days left.))
    9 days for 330 files / 100 MB

    Read the article

< Previous Page | 218 219 220 221 222 223 224 225 226 227 228 229  | Next Page >