Search Results

Search found 3353 results on 135 pages for 'volume activation'.


  • Disabling email-style usernames in Django 1.2 with django-registration

    - by shacker
    Django 1.2 allows usernames to take the form of an email address. Changed in Django 1.2: Usernames may now contain @, +, . and - characters I know that's a much-requested feature, but what if you don't want the new behavior? It makes for messy usernames in profile URLs and seems to break django-registration (if a user registers an account with an email-style username, the link in the django-registration activation email returns 404). Does anyone have a recipe for restoring the old behavior and disabling email-style usernames?
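    One possible recipe, offered as an assumption rather than a tested answer: django-registration's register view accepts a form_class argument, so you can subclass its RegistrationForm and reinstate the old \w+ rule there. A minimal sketch:

    # A hedged sketch (my own code, not from django-registration itself): a
    # form subclass that rejects anything outside the old \w+ username rule.
    import re
    from django import forms
    from registration.forms import RegistrationForm

    OLD_USERNAME_RE = re.compile(r'^\w+$')  # the pre-1.2 rule: letters, digits, _

    class StrictRegistrationForm(RegistrationForm):
        def clean_username(self):
            username = self.cleaned_data.get('username', '')
            if not OLD_USERNAME_RE.match(username):
                raise forms.ValidationError(
                    'Usernames may contain only letters, numbers and underscores.')
            return username

    You would then pass form_class=StrictRegistrationForm to the register view in your URLconf; the exact hook depends on your django-registration version.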

    Read the article

  • Methods to stop Software Piracy ?

    - by UK
    I am a great fan of open source technologies, but I have seen a lot of sites that offer pirated software. My question is: suppose you are a software developer and want to sell your product and stop piracy of it. Which techniques or methods would you use? Are there any standard tools or techniques available? The best example is Microsoft Windows WPA activation.

    Read the article

  • QTreeWidget activate item signals

    - by serge
    Hi everyone, I need to perform some actions when an item in a QTreeWidget is activated, but the following code doesn't give me the expected result:

    class MyWidget(QTreeWidget):
        def __init__(self, parent=None):
            super(MyWidget, self).__init__(parent)
            self.connect(self, SIGNAL("activated(QModelIndex)"), self.editCell)

        def editCell(self, index):
            print index

    or

    class MyWidget(QTreeWidget):
        def __init__(self, parent=None):
            super(MyWidget, self).__init__(parent)
            self.connect(self, SIGNAL("itemActivated(QTreeWidgetItem, int)"), self.editCell)

        def editCell(self, item, column=0):
            print item

    What am I doing wrong, and how do I handle item activation the right way? Thanks in advance, Serge
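    A likely culprit, offered as an assumption rather than a confirmed answer: with old-style connections the signature string must match the C++ signature exactly, including the pointer, so it should read "itemActivated(QTreeWidgetItem*, int)". New-style signal syntax sidesteps the string matching entirely. A minimal sketch:

    # Minimal sketch of both connection styles; the missing asterisk in the
    # old-style signature string is the assumed culprit (not confirmed here).
    from PyQt4.QtCore import SIGNAL
    from PyQt4.QtGui import QTreeWidget

    class MyWidget(QTreeWidget):
        def __init__(self, parent=None):
            super(MyWidget, self).__init__(parent)
            # Old style: the C++ signature must match exactly, pointer included.
            self.connect(self, SIGNAL("itemActivated(QTreeWidgetItem*, int)"),
                         self.editCell)
            # New style (preferred): no signature string to get wrong.
            # self.itemActivated.connect(self.editCell)

        def editCell(self, item, column=0):
            print(item.text(column))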

    Read the article

  • Zend_DB_Table Update problem

    - by davykiash
    I am trying to construct a simple update query in my model:

    class Model_DbTable_Account extends Zend_Db_Table_Abstract
    {
        protected $_name = 'accounts';

        public function activateaccount($activationcode)
        {
            $data = array(
                'accounts_status' => 'active',
            );
            $this->update($data, 'accounts_activationkey = ' . $activationcode);
        }
    }

    However, I get an SQLSTATE[42S22]: Column not found: 1054 Unknown column 'my activation code value' in 'where clause' error. What am I missing in the Zend_Db_Table update construct?
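    My reading, hedged accordingly: the activation code is concatenated into the WHERE clause unquoted, so MySQL parses the value as an identifier and reports an unknown column. In Zend Framework 1 the usual remedy is to let the adapter quote it, e.g. $this->getAdapter()->quoteInto('accounts_activationkey = ?', $activationcode). The same pitfall and the parameterized fix, sketched in Python against a throwaway SQLite table:

    # Hypothetical sketch: an unquoted value in a WHERE clause versus a bound
    # parameter. Table and values are made up for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (accounts_status TEXT, accounts_activationkey TEXT)")
    conn.execute("INSERT INTO accounts VALUES ('pending', 'abc123')")
    code = "abc123"

    # Broken: the value is spliced in bare, so the engine reads it as a column
    # name -- the moral equivalent of the 'Unknown column' error above.
    # conn.execute("UPDATE accounts SET accounts_status = 'active' "
    #              "WHERE accounts_activationkey = " + code)

    # Fixed: bind the value and let the driver quote it.
    conn.execute("UPDATE accounts SET accounts_status = 'active' "
                 "WHERE accounts_activationkey = ?", (code,))
    print(conn.execute("SELECT accounts_status FROM accounts").fetchone())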

    Read the article

  • Ant build classpath jar generates "error in opening zip file"

    - by Uberpuppy
    I have a project built in Eclipse with dependencies on 3rd-party jars. I'm trying to generate a suitable build file for Ant, using Eclipse's built-in "export Ant buildfile" feature as a starting block. When I run the build target I get the following error:

    [javac] error: error reading /base/repo/FabTrace/lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar; error in opening zip file

    The whole build file (auto-generated by Eclipse) looks like this (NB: the error above always references the first jar listed in the classpath):

    <project basedir="." default="build" name="FabTrace">
        <property environment="env"/>
        <property name="ECLIPSE_HOME" value="/opt/apps/eclipse"/>
        <property name="debuglevel" value="source,lines,vars"/>
        <property name="target" value="1.5"/>
        <property name="source" value="1.5"/>
        <path id="JUnit 4.libraryclasspath">
            <pathelement location="${ECLIPSE_HOME}/plugins/org.junit4_4.5.0.v20090824/junit.jar"/>
            <pathelement location="${ECLIPSE_HOME}/plugins/org.hamcrest.core_1.1.0.v20090501071000.jar"/>
        </path>
        <path id="FabTrace.classpath">
            <pathelement location="bin"/>
            <pathelement location="lib/apache/geronimo/specs/geronimo-j2ee-management_1.0_spec/1.0/geronimo-j2ee-management_1.0_spec-1.0.jar"/>
            <pathelement location="lib/apache/geronimo/specs/geronimo-jms_1.1_spec/1.0/geronimo-jms_1.1_spec-1.0.jar"/>
            <pathelement location="lib/commons-collections/commons-collections/3.2/commons-collections-3.2.jar"/>
            <pathelement location="lib/commons-io/commons-io/1.4/commons-io-1.4.jar"/>
            <pathelement location="lib/commons-lang/commons-lang/2.1/commons-lang-2.1.jar"/>
            <pathelement location="lib/commons-logging/commons-logging/1.1/commons-logging-1.1.jar"/>
            <pathelement location="lib/commons-logging/commons-logging-api/1.1/commons-logging-api-1.1.jar"/>
            <pathelement location="lib/javax/activation/activation/1.1/activation-1.1.jar"/>
            <pathelement location="lib/javax/jms/jms/1.1/jms-1.1.jar"/>
            <pathelement location="lib/javax/mail/mail/1.4/mail-1.4.jar"/>
            <pathelement location="lib/javax/xml/bind/jaxb-api/2.1/jaxb-api-2.1.jar"/>
            <pathelement location="lib/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar"/>
            <pathelement location="lib/junit/junit/4.4/junit-4.4.jar"/>
            <pathelement location="lib/log4j/log4j/1.2.15/log4j-1.2.15.jar"/>
            <pathelement location="lib/apache/camel/camel-jms-2.0-M1.jar"/>
            <pathelement location="lib/spring/spring-2.5.6.jar"/>
            <pathelement location="lib/apache/camel/camel-bundle-2.0-M1.jar"/>
            <pathelement location="lib/backport-util-concurrent/backport-util-concurrent-3.1.jar"/>
            <pathelement location="lib/commons-pool/commons-pool-1.4.jar"/>
            <pathelement location="lib/apache/camel/camel-activemq-1.1.0.jar"/>
            <pathelement location="lib/apache/activemq/activemq-camel-5.2.0.jar"/>
            <pathelement location="lib/jencks/jencks-2.2-all.jar"/>
            <pathelement location="lib/jencks/jencks-amqpool-2.2.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/activemq-all-5.3.1.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/xbean-spring-3.6.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/activemq-core-5.3.1.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/camel-jetty-2.2.0.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-6.1.9.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-util-6.1.9.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/jetty-xbean-6.1.9.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/activemq-optional-5.3.1.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/web/geronimo-servlet_2.5_spec-1.2.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-beans-2.5.6.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-context-2.5.6.jar"/>
            <pathelement location="lib/activemq/apache-activemq-5.3.1/lib/optional/spring-core-2.5.6.jar"/>
            <path refid="JUnit 4.libraryclasspath"/>
        </path>
        <target name="init">
            <mkdir dir="bin"/>
            <copy includeemptydirs="false" todir="bin">
                <fileset dir="src/main/java">
                    <exclude name="**/*.launch"/>
                    <exclude name="**/*.java"/>
                </fileset>
            </copy>
            <copy includeemptydirs="false" todir="bin">
                <fileset dir="src/test/java">
                    <exclude name="**/*.launch"/>
                    <exclude name="**/*.java"/>
                </fileset>
            </copy>
            <copy includeemptydirs="false" todir="bin">
                <fileset dir="config">
                    <exclude name="**/*.launch"/>
                    <exclude name="**/*.java"/>
                </fileset>
            </copy>
        </target>
        <target name="clean">
            <delete dir="bin"/>
        </target>
        <target depends="clean" name="cleanall"/>
        <target depends="build-subprojects,build-project" name="build"/>
        <target name="build-subprojects"/>
        <target depends="init" name="build-project">
            <echo message="${ant.project.name}: ${ant.file}"/>
            <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}">
                <src path="src/main/java"/>
                <classpath refid="FabTrace.classpath"/>
            </javac>
            <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}">
                <src path="src/test/java"/>
                <classpath refid="FabTrace.classpath"/>
            </javac>
            <javac debug="true" debuglevel="${debuglevel}" destdir="bin" source="${source}" target="${target}">
                <src path="config"/>
                <classpath refid="FabTrace.classpath"/>
            </javac>
        </target>
    </project>

    (I know there's Eclipse-specific stuff in here, but I get the same results with or without it.) I've done ye olde Google search and trawled around without success. I can confirm that all the jars really do exist. I've also tried from the command line and as sudo, with the same results. Any help would be greatly appreciated. Cheers

    Read the article

  • SharePoint and Log4Net

    - by Nico
    I'm looking for best practices for integrating log4net into SharePoint for web requests, feature activation, and all the timer stuff. I have several subprojects in my farm, and I would like to have only one Log4Net.config file. [Edit] Not only do I need to configure log4net for the web application, which is easy to do (I use global.asax and a log4net.config file, so I can modify log settings without reloading the webapp), but I also need to log asynchronous events: event handlers (like ItemAdded), timer jobs, ...

    Read the article

  • service broker message process order

    - by Blootac
    Everywhere I read says that messages handled by the Service Broker are processed in the order they arrive. Yet if you create a table, message type, contract, service, etc., with an activation stored proc that waits for 2 seconds and inserts the message into a table, set the max queue readers to 5 or 10, and send 20-odd messages, I can see in the table that they are inserted out of order, even though when I send them and look at the contents of the queue the messages are all in the right order. Is it due to the WAITFOR DELAY waiting for the nearest second, each thread having different subsecond times and then fighting for a lock, or something like that? The reason I've got a delay in there is to simulate delays from joins etc. Thanks. Demo code:

    --create the table and service broker
    CREATE TABLE test
    (
        id int identity(1,1),
        contents varchar(100)
    )

    CREATE MESSAGE TYPE test
    CREATE CONTRACT mycontract
    (
        test sent by initiator
    )
    GO

    CREATE PROCEDURE dostuff
    AS
    BEGIN
        DECLARE @msg varchar(100);
        RECEIVE TOP (1) @msg = message_body FROM myQueue
        IF @msg IS NOT NULL
        BEGIN
            WAITFOR DELAY '00:00:02'
            INSERT INTO test(contents) values(@msg)
        END
    END
    GO

    ALTER QUEUE myQueue WITH STATUS = ON,
        ACTIVATION (
            STATUS = ON,
            PROCEDURE_NAME = dostuff,
            MAX_QUEUE_READERS = 10,
            EXECUTE AS SELF )

    create service senderService on queue myQueue ( mycontract )
    create service receiverService on queue myQueue ( mycontract )
    GO

    --**********************************************************
    --now insert lots of messages to the queue
    DECLARE @dialog_handle uniqueidentifier

    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>1</test>');
    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>2</test>')
    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>3</test>')
    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>4</test>')
    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>5</test>')
    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>6</test>')
    BEGIN DIALOG @dialog_handle FROM SERVICE senderService TO SERVICE 'receiverService' ON CONTRACT mycontract;
    SEND ON CONVERSATION @dialog_handle MESSAGE TYPE test ('<test>7</test>')
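    For what it's worth, the out-of-order inserts are expected with parallel activation: RECEIVE TOP (1) dequeues in order, but with MAX_QUEUE_READERS = 10 up to ten activated procedures run at once, and nothing guarantees the order in which their WAITFOR DELAY calls finish and their inserts commit. A toy analogy in Python (my own illustration, nothing to do with Service Broker itself) reproduces the effect:

    # Hypothetical analogy: in-order dequeue + parallel workers = out-of-order inserts.
    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    results = []

    def process(msg):
        time.sleep(2 + random.uniform(0, 0.05))  # stand-in for WAITFOR DELAY + join work
        results.append(msg)                      # stand-in for INSERT INTO test

    with ThreadPoolExecutor(max_workers=10) as pool:   # MAX_QUEUE_READERS = 10
        for i in range(1, 21):                         # 20 messages, sent in order
            pool.submit(process, i)

    print(results)  # almost never [1, 2, ..., 20]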

    Read the article

  • Ext js - Dynamically changing content of Tab in a TabPanel

    - by 29er
    Hey, I have an Ext JS TabPanel where the hidden tabs aren't loaded at all upon initial instantiation (the only thing I set is the title). Upon 'activation' of a tab I want to call a method which then instantiates a new FormPanel/GridPanel and puts that content into the tab. Can someone point me to a code example or give me tips on how to do this? Thanks so much!

    Read the article

  • Maven: How to include a specific folder or file when assembling a project, depending on whether it is a dev or production build?

    - by user563588
    Using the maven-assembly-plugin:

    <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.1</version>
        <configuration>
            <descriptors>
                <descriptor>descriptor.xml</descriptor>
            </descriptors>
            <finalName>xxx-impl-${pom.version}</finalName>
            <outputDirectory>target/assembly</outputDirectory>
            <workDirectory>target/assembly/work</workDirectory>
        </configuration>
    </plugin>

    in the descriptor.xml file we can specify:

    <fileSets>
        <fileSet>
            <directory>src/install</directory>
            <outputDirectory>/</outputDirectory>
        </fileSet>
    </fileSets>

    Is it possible to include a specific file from this folder or a sub-folder depending on the profile? Or some other way... Like this:

    <profiles>
        <profile>
            <id>dev</id>
            <activation>
                <activeByDefault>false</activeByDefault>
            </activation>
            <build>
                <resources>
                    <resource>
                        <directory>src/install/dev</directory>
                        <includes>
                            <include>**/*</include>
                        </includes>
                    </resource>
                </resources>
            </build>
        </profile>
        <profile>
            <id>prod</id>
            <build>
                <resources>
                    <resource>
                        <directory>src/install/prod</directory>
                        <includes>
                            <include>**/*</include>
                        </includes>
                    </resource>
                </resources>
            </build>
        </profile>
    </profiles>

    But this puts the resources in the jar when packaging, and we need them in the zip when assembling, as I already mentioned above :( Thanks!

    Read the article

  • Can't send e-mail to hotmail addresses

    - by FinalFrag
    I recently did a mailing to a lot of Hotmail addresses, and now all the e-mails I send out (from PHP, by the way) don't seem to reach their destination. So I figure Hotmail banned my URL... Is there any way to get rid of this ban so my e-mails (like activation e-mails, etc.) will reach the users again?

    Read the article

  • Secure php code for copyright

    - by cosy
    I have an eCommerce platform, and I want to secure it. How can that be done? I don't want anybody to copy my code. Something like a one-year license with an activation code, or something along those lines. Sorry for my English, and thanks a lot!

    Read the article

  • E-mail verification in wordpress

    - by Sanjai Palliyil
    Hi all. I am using the Register Plus plugin for registration on my WordPress site. I have enabled e-mail verification, whereby the user gets an activation link. What I want is for the user to be enabled automatically when they click the link; currently the admin has to log in to the system and verify new users before they can log in. How do I achieve this? Any help is appreciated.

    Read the article

  • Windows CE: Using IOCTL_DISK_GET_STORAGEID

    - by Bruce Eitman
    A customer approached me recently to ask if I had any code that demonstrated how to use STORAGE_IDENTIFICATION, which is the data structure used to get the storage ID from a disk. I didn't have anything, which of course sent me off writing code and blogging about it. Simple enough, right? Go read the documentation for STORAGE_IDENTIFICATION, which led me to IOCTL_DISK_GET_STORAGEID. Except that the documentation for IOCTL_DISK_GET_STORAGEID seems to have a problem.

    The most obvious problem is that it shows how to call CreateFile() to get the handle to use with DeviceIoControl(), but doesn't show how to call DeviceIoControl(). That is odd, but not really a problem. But the call to CreateFile() seems to be wrong, or at least it was in my testing. The documentation shows the call to be:

    hVolume = CreateFile(TEXT("\\Storage Card\\Vol:"), GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);

    I tried that, but my testing with an SD card mounted as Storage Card failed on the call to CreateFile(). I tried several variations of this, but none worked. Then I remembered that some time ago I wrote an article about enumerating the disks (Windows CE: Displaying Disk Information). I pulled up that code and tried again with both the disk device name and the partition volume name. The disk device name worked. The device names are DSKx:, where x is the disk number. I created the following function to output the Manufacturer ID and Serial Number returned from IOCTL_DISK_GET_STORAGEID:

    #include "windows.h"
    #include "Diskio.h"

    BOOL DisplayDiskID( TCHAR *Disk )
    {
        STORAGE_IDENTIFICATION *StoreID = NULL;
        STORAGE_IDENTIFICATION GetSizeStoreID;
        DWORD dwSize;
        HANDLE hVol;
        TCHAR VolumeName[MAX_PATH];
        TCHAR *ManfID;
        TCHAR *SerialNumber;
        BOOL RetVal = FALSE;
        DWORD GLE;

        // Note that either of the following works
        //_stprintf(VolumeName, _T("\\%s\\Vol:"), Disk);
        _stprintf(VolumeName, _T("\\%s"), Disk);

        hVol = CreateFile( Disk, GENERIC_READ|GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);

        if( hVol != INVALID_HANDLE_VALUE )
        {
            // First call with a fixed-size buffer to learn the required size
            if(DeviceIoControl(hVol, IOCTL_DISK_GET_STORAGEID, (LPVOID)NULL, 0, &GetSizeStoreID, sizeof(STORAGE_IDENTIFICATION), &dwSize, NULL) == FALSE)
            {
                GLE = GetLastError();
                if( GLE == ERROR_INSUFFICIENT_BUFFER )
                {
                    StoreID = (STORAGE_IDENTIFICATION *)malloc( GetSizeStoreID.dwSize );
                    if(DeviceIoControl(hVol, IOCTL_DISK_GET_STORAGEID, (LPVOID)NULL, 0, StoreID, GetSizeStoreID.dwSize, &dwSize, NULL) != FALSE)
                    {
                        RETAILMSG( 1, (TEXT("DisplayDiskID: Flags %X\r\n"), StoreID->dwFlags ));
                        if( !(StoreID->dwFlags & MANUFACTUREID_INVALID) )
                        {
                            ManfID = (TCHAR *)((DWORD)StoreID + StoreID->dwManufactureIDOffset);
                            RETAILMSG( 1, (TEXT("DisplayDiskID: Manufacture ID %s\r\n"), ManfID ));
                        }
                        if( !(StoreID->dwFlags & SERIALNUM_INVALID) )
                        {
                            SerialNumber = (TCHAR *)((DWORD)StoreID + StoreID->dwSerialNumOffset);
                            RETAILMSG( 1, (TEXT("DisplayDiskID: Serial Number %s\r\n"), SerialNumber ));
                        }
                        RetVal = TRUE;
                    }
                    else
                        RETAILMSG( 1, (TEXT("DisplayDiskID: DeviceIoControl failed (%d)\r\n"), GLE));

                    free(StoreID);
                }
                else
                    RETAILMSG( 1, (TEXT("No Disk Identification available for %s\r\n"), VolumeName ));
            }
            else
                RETAILMSG( 1, (TEXT("DisplayDiskID: DeviceIoControl succeeded (and shouldn't have)\r\n")));

            CloseHandle (hVol);
        }
        else
            RETAILMSG( 1, (TEXT("DisplayDiskID: Failed to open volume (%s)\r\n"), VolumeName ));

        return RetVal;
    }

    Further testing showed that both \DSKx: and \DSKx:\Vol: work when calling CreateFile().

    Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • DCOM Authentication Fails to use Kerberos, Falls back to NTLM

    - by Asa Yeamans
    I have a web service written in Classic ASP. This web service attempts to create a VirtualServer.Application object on another server via DCOM. It fails with Permission Denied. However, I have another component instantiated in this same web service on the same remote server that is created without problems. That component is a custom in-house component.

    The web service is called from a standalone EXE program that calls it via WinHTTP. It has been verified that WinHTTP authenticates to the web service successfully with Kerberos. The user authenticated to the web service is the Administrator user. So the EXE-to-webservice authentication step is successful and uses Kerberos.

    I have verified the DCOM permissions on the remote computer with DCOMCNFG. The default limits allow administrators both local and remote activation, both local and remote access, and both local and remote launch. The default component permissions allow the same. This has been verified. The individual component permissions for the working component are set to defaults, and the individual component permissions for the VirtualServer.Application component are also set to defaults. Based on these settings, the web service should be able to instantiate and access the components on the remote computer.

    Setting up a Wireshark trace while running both tests, one with the working component and one with the VirtualServer.Application component, reveals an interesting behavior. When the web service instantiates the working custom component, I can see the request on the wire to the RPCSS endpoint mapper first perform the TCP connect sequence, then perform the bind request with the appropriate security package, in this case Kerberos. After it obtains the endpoint for the working DCOM component, it connects to the DCOM endpoint, authenticating again via Kerberos, and it successfully instantiates and communicates. With the failing VirtualServer.Application component, I again see the bind request with Kerberos go to the RPCSS endpoint mapper successfully. However, when it then attempts to connect to the endpoint in the Virtual Server process, it fails to connect because it only attempts to authenticate with NTLM, which ultimately fails because the web service does not have access to the credentials to perform the NTLM hash. Why is it attempting to authenticate via NTLM?

    Additional information:

    - Both components run on the same server via DCOM
    - Both components run as Local System on the server
    - Both components are Win32 service components
    - Both components have the exact same launch/access/activation DCOM permissions
    - Both Win32 services are set to run as Local System
    - The permission denied is not a permissions issue as far as I can tell; it is an authentication issue. Permission is denied because NTLM authentication is used with a NULL username instead of Kerberos.

    Constrained delegation is set up on the server hosting the web service:

    - The server hosting the webservice is allowed to delegate to rpcss/dcom-server-name
    - The server hosting the webservice is allowed to delegate to vssvc/dcom-server-name
    - The DCOM server is allowed to delegate to rpcss/webservice-server
    - The SPNs registered on the DCOM server include rpcss/dcom-server-name and vssvc/dcom-server-name as well as the HOST/dcom-server-name related SPNs
    - The SPNs registered on the webservice server include rpcss/webservice-server and the HOST/webservice-server related SPNs

    Anybody have any ideas why the attempt to create a VirtualServer.Application object on a remote server falls back to NTLM authentication, causing it to fail with Permission Denied?

    Additional information: when the following code is run in the context of the web service, directly via a testing-only, just-developed COM component, it fails on the indicated line with Access Denied.

    COSERVERINFO csi;
    csi.dwReserved1 = 0;
    csi.pwszName = L"terahnee.rivin.net";
    csi.pAuthInfo = NULL;
    csi.dwReserved2 = NULL;

    hr = CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi, IID_IClassFactory, (void **) &pClsFact);
    if(FAILED( hr )) goto error1;

    // Fails here with HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED)
    hr = pClsFact->CreateInstance(NULL, IID_IUnknown, (void **) &pUnk);
    if(FAILED( hr )) goto error2;

    I've also noticed in the Wireshark traces that the attempt to connect to the service process component only requests NTLMSSP authentication; it doesn't even attempt to use Kerberos. This suggests that for some reason the web service thinks it can't use Kerberos...

    Read the article

  • Big Data: Size isn’t everything

    - by Simon Elliston Ball
    Big Data has a big problem; it's the word "Big". These days, a quick Google search will uncover terabytes of negative opinion about the futility of relying on huge volumes of data to produce magical, meaningful insight. There are also many clichéd but correct assertions about the difficulties of correlation versus causation in massive data sets. In reading some of these pieces, I begin to understand how climatologists must feel when people complain ironically about "global warming" during snowfall.

    Big Data has a name problem. There is a lot more to it than size. Shape, Speed, and…err…Veracity are also key elements (now I understand why Gartner and the gang went with V's instead of S's).

    The need to handle data of different shapes (Variety) is not new. Data developers have always had to mold strange-shaped data into our reporting systems, integrating with semi-structured sources, and even straying into full-text searching. However, what we lacked was an easy way to add semi-structured and unstructured data to our arsenal. New "Big Data" tools such as MongoDB, and other NoSQL (Not Only SQL) databases, or a graph database like Neo4J, fill this gap. Still, to many, they simply introduce noise to the clean signal that is their sensibly normalized data structures.

    What about speed (Velocity)? It's not just high-frequency trading that generates data faster than a single system can handle. Many other applications need to make trade-offs that traditional databases won't, in order to cope with high data insert speeds, or to extract quickly the required information from data streams. Unfortunately, many people equate Big Data with the Hadoop platform, whose batch-driven queries and job processing queues have little to do with "velocity". StreamInsight, Esper and Tibco BusinessEvents are examples of Big Data tools designed to handle high-velocity data streams. Again, the name doesn't do the discipline of Big Data any favors. Ultimately, though, does analyzing fast-moving data produce insights as useful as the ones we get through a more considered approach, enabled by traditional BI?

    Finally, we have Veracity and Value. In many ways, these additions to the classic Volume, Velocity and Variety trio acknowledge the criticism that without high-quality data and genuinely valuable outputs, data, big or otherwise, is worthless. As a discipline, Big Data has recognized this, and data quality and cleaning tools are starting to appear to support it. Rather than simply decrying the irrelevance of Volume, we need as a profession to focus on how to improve Veracity and Value. Perhaps we should just declare the 'Big' silent, embrace these new data tools and help develop better practices for their use, just as we did for the good old RDBMS?

    What does Big Data mean to you? Which V gives your business the most pain, or the most value? Do you see these new tools as a useful addition to the BI toolbox, or are they just enabling a dangerous trend to find ghosts in the noise?

    Read the article

  • Clustering for Mere Mortals (Pt2)

    - by Geoff N. Hiten
    Planning. I could stop there and let that be the entirety of post #2 in this series. Planning is the single most important element in building a cluster, and the Laptop Demo Cluster is no exception. One of the more awkward parts of actually creating a cluster is coordinating information between Windows Clustering and SQL Clustering. The dialog boxes show up hours apart, but still have to have matching and consistent information. Excel seems to be a good tool for tracking these settings. My workbook has four pages: Systems, Storage, Network, and Service Accounts. The Systems page looks like this:

    | Name         | Role                                         | Software                                   | Location  |
    |--------------|----------------------------------------------|--------------------------------------------|-----------|
    | East         | Physical Cluster Node 1                      | Windows Server 2008 R2 Enterprise          | Laptop VM |
    | West         | Physical Cluster Node 2                      | Windows Server 2008 R2 Enterprise          | Laptop VM |
    | North        | Physical Cluster Node 3 (Future Reserved)    | Windows Server 2008 R2 Enterprise          | Laptop VM |
    | MicroCluster | Cluster Management Interface                 | N/A                                        | Laptop VM |
    | SQL01        | High-Performance High-Security Instance      | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM |
    | SQL02        | High-Performance Standard-Security Instance  | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM |
    | SQL03        | Standard-Performance High-Security Instance  | SQL Server 2008 Enterprise Edition x64 SP1 | Laptop VM |

    Note that everything that has a computer name is listed here, whether physical or virtual. Storage looks like this:

    | Storage Name | Instance     | Purpose         | Volume      | Path                      | Size (GB) | LUN ID | Speed |
    |--------------|--------------|-----------------|-------------|---------------------------|-----------|--------|-------|
    | Quorum       | MicroCluster | Cluster Quorum  | Quorum      | Q:                        | 2         |        |       |
    | SQL01Anchor  | SQL01        | Instance Anchor | SQL01Anchor | L:                        | 2         |        |       |
    | SQL02Anchor  | SQL02        | Instance Anchor | SQL02Anchor | M:                        | 2         |        |       |
    | SQL01Data1   | SQL01        | SQL Data        | SQL01Data1  | L:\MountPoints\SQL01Data1 | 2         |        |       |
    | SQL02Data1   | SQL02        | SQL Data        | SQL02Data1  | M:\MountPoints\SQL02Data1 |           |        |       |

    Starting at the left is the name used in the storage array. It is important to rename resources at each level, whether it is Storage, LUN, Volume, or disk folder. Otherwise, troubleshooting gets complex and difficult. You want to be able to glance at a resource at any level and see where it comes from and what it is connected to. Networking is the same way:

    | System | Network   | VLAN     | IP                 | Subnet Mask   | Gateway     | DNS1        | DNS2        |
    |--------|-----------|----------|--------------------|---------------|-------------|-------------|-------------|
    | East   | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1 |
    | East   | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |             |
    | West   | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1 |
    | West   | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |             |
    | North  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 | 10.97.230.1 | 10.97.230.1 | 10.97.230.1 |
    | North  | Heartbeat | Cluster2 |                    | 255.255.255.0 |             |             |             |
    | SQL01  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 |             |             |             |
    | SQL02  | Public    | Cluster1 | 10.97.230.x (DHCP) | 255.255.255.0 |             |             |             |

    One hallmark of a poorly planned and implemented cluster is a bunch of "Local Network Connection #n" entries in the network settings page. That lets me know that somebody didn't care about the long-term supportability of the cluster. This can be critically important with Hyper-V clusters and their high NIC counts. Final page:

    | Instance             | Service Name | Account       | Password   | Domain  | OU               |
    |----------------------|--------------|---------------|------------|---------|------------------|
    | SQL01                | SQL Server   | SVCSQL01      | Baseline22 | MicroAD | Service Accounts |
    | SQL01                | SQL Agent    | SVCSQL01      | Baseline22 | MicroAD | Service Accounts |
    | SQL02                | SQL Server   | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts |
    | SQL02                | SQL Agent    | SVC_SQL02     | Baseline22 | MicroAD | Service Accounts |
    | SQL03 (Future)       | SQL Server   | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts |
    | SQL03 (Future)       | SQL Agent    | SVC_SQL03     | Baseline22 | MicroAD | Service Accounts |
    | Installation Account |              | administrator |            |         |                  |

    Yes, I write down the account information. I secure the file via NTFS, but I don't want to fumble around looking for passwords when it comes time to rebuild a node. Always fill out the workbook COMPLETELY before installing anything. The whole point is to have everything you need at your fingertips before you begin. The install experience is so much better and more productive with this information in place.

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem

    The SAN data volume has a throughput capacity of 400MB/sec; however, my query is still running slow and is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4MB/sec. Where is the problem, and how can I find it?

    Solution

    This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains, with working examples, how PREFETCH determines the performance of a Nested Loop join. First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table: if the number of rows in the outer table is greater than 25, SQL Server will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below, and the Orders table contains about 6 million rows.

    In this first example, I create a stored procedure against the two tables and then execute it. Before running the stored procedure, I include the actual execution plan.

    --Example provided by www.sqlworkshops.com
    --Create procedure that pulls orders based on City
    --Do not forget to include the actual execution plan
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
        FROM RegionalOrders ao
        INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
        WHERE City = @City
    END
    GO
    SET STATISTICS time ON
    GO

    --Example provided by www.sqlworkshops.com
    --Execute the procedure with parameter SmallCity1
    EXEC RegionalOrdersProc 'SmallCity1'
    GO

    After running the stored procedure, if we right-click on the Clustered Index Scan and click Properties, we can see the Estimated Number of Rows is 24. If we right-click on Nested Loops and click Properties, we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value 'SmallCity1' in the outer table is less than 25.

    Now, if I run the same procedure with parameter 'BigCity', will Prefetch be enabled?

    --Example provided by www.sqlworkshops.com
    --Execute the procedure with parameter BigCity
    --We are using the cached plan
    EXEC RegionalOrdersProc 'BigCity'
    GO

    As we can see, Prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from 'SmallCity1', which had Prefetch disabled. Please note that even though we have 999 rows for 'BigCity', the Estimated Number of Rows is still 24.

    Finally, let's clear the procedure cache to trigger a new optimization and execute the procedure again.

    DBCC freeproccache
    GO
    EXEC RegionalOrdersProc 'BigCity'
    GO

    This time our procedure runs in under a second, Prefetch is enabled, and the Estimated Number of Rows is 999.

    The RegionalOrdersProc can be optimized by using the example below, where we use an optimizer hint. I have also shown some other hints that could be used as well.

    --Example provided by www.sqlworkshops.com
    --You can fix the issue by using any of the following hints
    --Create procedure that pulls orders based on City
    DROP PROC RegionalOrdersProc
    GO
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
        DECLARE @OrderID INT, @OrderDetails CHAR(200)
        SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
        FROM RegionalOrders ao
        INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
        WHERE City = @City
        --Hinting optimizer to use SmallCity2 for estimation
        OPTION (optimize FOR (@City = 'SmallCity2'))
        --Hinting optimizer to estimate for the current parameters
        --option (recompile)
        --Hinting optimizer not to use the histogram, but rather
        --density for estimation (average of all 3 cities)
        --option (optimize for (@City UNKNOWN))
        --option (optimize for UNKNOWN)
    END
    GO

    In conclusion, this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger it.

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to Hierarchical Query using a Recursive CTE

    - by pinaldave
    This blog post is inspired by SQL Queries Joes 2 Pros: SQL Query Techniques For Microsoft SQL Server 2008 – SQL Exam Prep Series 70-433 – Volume 2. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject, SQL SERVER – Introduction to Hierarchical Query using a Recursive CTE – A Primer. In that article we discussed the basic terminology of the CTE. The article further covers the following important concepts of common table expressions:

    - What is a Common Table Expression (CTE)
    - Building a Recursive CTE
    - Identify the Anchor and Recursive Query
    - Add the Anchor and Recursive query to a CTE
    - Add an expression to track hierarchical level
    - Add a self-referencing INNER JOIN statement

    These six are the most important concepts related to CTEs and SQL Server. There are many more things to learn, but without the beginner fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1) You have an employee table with the following data.

    | EmpID | FirstName | LastName  | MgrID |
    |-------|-----------|-----------|-------|
    | 1     | David     | Kennson   | 11    |
    | 2     | Eric      | Bender    | 11    |
    | 3     | Lisa      | Kendall   | 4     |
    | 4     | David     | Lonning   | 11    |
    | 5     | John      | Marshbank | 4     |
    | 6     | James     | Newton    | 3     |
    | 7     | Sally     | Smith     | NULL  |

    You need to write a recursive CTE that shows the EmpID, FirstName, LastName, MgrID, and employee level. The CEO should be listed at Level 1. All people who work for the CEO will be listed at Level 2. All of the people who work for those people will be listed at Level 3. Which CTE code will achieve this result?

    Option 1:

    WITH EmpList AS
    (SELECT Boss.EmpID, Boss.FName, Boss.LName, Boss.MgrID, 1 AS Lvl
     FROM Employee AS Boss
     WHERE Boss.MgrID IS NULL
     UNION ALL
     SELECT E.EmpID, E.FirstName, E.LastName, E.MgrID, EmpList.Lvl + 1
     FROM Employee AS E
     INNER JOIN EmpList ON E.MgrID = EmpList.EmpID)
    SELECT * FROM EmpList

    Option 2:

    WITH EmpList AS
    (SELECT EmpID, FirstName, LastName, MgrID, 1 AS Lvl
     FROM Employee
     WHERE MgrID IS NULL
     UNION ALL
     SELECT EmpID, FirstName, LastName, MgrID, 2 AS Lvl)
    SELECT * FROM BossList

    Option 3:

    WITH EmpList AS
    (SELECT EmpID, FirstName, LastName, MgrID, 1 AS Lvl
     FROM Employee
     WHERE MgrID IS NOT NULL
     UNION
     SELECT EmpID, FirstName, LastName, MgrID, BossList.Lvl + 1
     FROM Employee
     INNER JOIN EmpList BossList ON Employee.MgrID = BossList.EmpID)
    SELECT * FROM EmpList

    2) You have a table named Employee. The EmployeeID of each employee's manager is in the ManagerID column. You need to write a recursive query that produces a list of employees and their managers. The query must also include the employee's level in the hierarchy. You write the following code segment:

    WITH EmployeeList (EmployeeID, FullName, ManagerName, Level)
    AS
    (
        --PICK ANSWER CODE HERE
    )

    Option 1:

    SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level]
    FROM Employee
    WHERE ManagerID IS NULL
    UNION ALL
    SELECT emp.EmployeeID, emp.FullName, mgr.FullName, 1 + 1 AS [Level]
    FROM Employee emp
    JOIN Employee mgr ON emp.ManagerID = mgr.EmployeeId

    Option 2:

    SELECT EmployeeID, FullName, '' AS [ManagerID], 1 AS [Level]
    FROM Employee
    WHERE ManagerID IS NULL
    UNION ALL
    SELECT emp.EmployeeID, emp.FullName, mgr.FullName, mgr.Level + 1
    FROM EmployeeList mgr
    JOIN Employee emp ON emp.ManagerID = mgr.EmployeeId

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change your answers, you still have a chance.

    Solution

    1) Option 1
    2) Option 2

    Now let us check the answers and compare yours to the ones above. I am very confident you got them correct.

    Available at USA: Amazon | India: Flipkart, IndiaPlaza. Volumes: 1, 2, 3, 4, 5.

    Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
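    As a cross-check on the quiz logic, here is the same anchor-plus-recursion idea sketched in plain Python (my own illustration, not from the book). Note one assumption: the table as printed gives Sally Smith EmpID 7 while her reports point at MgrID 11, so for the sketch I assume the CEO's EmpID is 11 so that the hierarchy closes.

    # Hypothetical cross-check of the recursive CTE logic in plain Python.
    # Rows: (EmpID, FirstName, LastName, MgrID); MgrID None marks the anchor row.
    employees = [
        (1, "David", "Kennson", 11), (2, "Eric", "Bender", 11),
        (3, "Lisa", "Kendall", 4),   (4, "David", "Lonning", 11),
        (5, "John", "Marshbank", 4), (6, "James", "Newton", 3),
        (11, "Sally", "Smith", None),  # assumed CEO EmpID (see note above)
    ]

    def hierarchy_levels(rows):
        # Anchor member: employees with no manager sit at level 1.
        current = [(eid, fn, ln, mgr, 1) for eid, fn, ln, mgr in rows if mgr is None]
        result = list(current)
        # Recursive member: self-join each new level against the previous one.
        while current:
            prev = {eid: lvl for eid, _, _, _, lvl in current}
            current = [(eid, fn, ln, mgr, prev[mgr] + 1)
                       for eid, fn, ln, mgr in rows if mgr in prev]
            result.extend(current)
        return result

    for row in hierarchy_levels(employees):
        print(row)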

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the relational database and the NoSQL database in the Big Data story. In this article we will understand the role of key-value pair databases and document databases in supporting the Big Data story. Now we will see a few examples of the operational databases:

    - Relational Databases (yesterday's post)
    - NoSQL Databases (yesterday's post)
    - Key-Value Pair Databases (this post)
    - Document Databases (this post)
    - Columnar Databases (tomorrow's post)
    - Graph Databases (tomorrow's post)
    - Spatial Databases (tomorrow's post)

    Key-Value Pair Databases

    Key-value pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. KVP databases are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantages of key-value pair (KVP) databases are that they do not follow ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication, and high availability. In KVP databases the data is stored as strings. Here is a simple example of what a key-value database looks like:

    | Key     | Value      |
    |---------|------------|
    | Name    | Pinal Dave |
    | Color   | Blue       |
    | Twitter | @pinaldave |
    | Name    | Nupur Dave |
    | Movie   | The Hero   |

    As the number of users grows in a key-value pair database, it starts getting difficult to manage the entire database. As there is no specific schema or rules associated with the database, there are chances that the database grows exponentially as well. It is very crucial to select the right key-value pair database, one that offers an additional set of tools to manage the data and provides finer control over various business aspects of it.

    Riak

    Riak is one of the most popular key-value databases. It is known for its scalability and performance with high-volume, high-velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps in building a manageable system. We will discuss Riak further in future blog posts. Key-value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind, KVP databases are good options to consider.

    Document Databases

    There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storing document components for storage. It is the second type of document database we are talking about here. These databases use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy to understand notation, and it is very easy to write for applications. The two major JSON structures used for document databases are 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

    MongoDB

    MongoDB databases are called collections. Each collection is built of documents, and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available and supports query services as well as MapReduce. It is often used in high-volume content management systems.

    CouchDB

    CouchDB databases are composed of documents which consist of fields and attachments (known as description). It supports ACID properties. The main attraction point of CouchDB is that it will continue to operate even when network connectivity is sketchy; due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real-time analytics in a social networking or content management system.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
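    To make the two shapes concrete, here is a minimal Python sketch (my own illustration, not from the article): a key-value store is conceptually a dictionary of strings, while a document store keeps self-describing JSON documents built from name-value pairs and ordered lists.

    # Hypothetical illustration of the two data shapes discussed above.
    import json

    # Key-value pair store: lookup is by key only; values are opaque strings.
    kvp_store = {
        "Name": "Pinal Dave",
        "Color": "Blue",
        "Twitter": "@pinaldave",
    }
    print(kvp_store["Twitter"])

    # Document store: each document carries its own structure, built from
    # JSON's two building blocks, name-value pairs and ordered lists.
    author_doc = {
        "name": "Pinal Dave",                                  # name-value pair
        "interests": ["SQL", "Big Data"],                      # ordered list
        "contact": {"blog": "http://blog.sqlauthority.com"},   # nested pairs
    }
    print(json.dumps(author_doc, indent=2))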

    Read the article
