Search Results

Search found 55028 results on 2202 pages for 'exchange system manager'.


  • How do I safely test a BizTalk app by manipulating the Windows OS system time without breaking Active Directory?

    - by melaos
    I have a BizTalk / Windows-service middleware application which talks to other systems. Recently we had a request to test scenarios that relate to the date. As there are a lot of places in the application that use the .NET DateTime.Now value, we don't really want to go down to the code level and change all these values, so we're looking at the simplest way to test, which is to just change the OS time. But what we notice is that sometimes when we change the system date and time, we get account lockouts due to Active Directory. So my question is: what's a good and safe way to test future dates, etc. by changing the Windows OS system date and time without causing any issues with Active Directory? And where can I find out more about AD, how it issues tokens, and what the correlation is with system date/time changes. Thanks! ~m
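    For context on the lockouts: Kerberos authentication in an AD domain tolerates only a small clock skew between machines (5 minutes by default), so a box whose clock has been pushed into the future will start failing authentication against the domain controller. If the OS clock cannot be moved safely, the fallback is the code-level change the question tries to avoid: routing time lookups through a small abstraction so tests can shift "now" without touching the clock. A minimal sketch, with hypothetical names that are not BizTalk APIs:

        using System;

        // Hypothetical sketch: route time lookups through an interface so tests can
        // shift "now" without touching the OS clock.
        public interface ISystemClock
        {
            DateTime Now { get; }
        }

        public sealed class RealClock : ISystemClock
        {
            public DateTime Now { get { return DateTime.Now; } }
        }

        public sealed class OffsetClock : ISystemClock
        {
            private readonly TimeSpan _offset;
            public OffsetClock(TimeSpan offset) { _offset = offset; }
            public DateTime Now { get { return DateTime.Now + _offset; } }
        }

    In a test configuration the date-sensitive components would be handed an OffsetClock shifted into the future while production keeps RealClock, so the domain controller never sees a skewed clock. Another low-impact option is to run the date tests on a machine or VM snapshot that is not joined to the domain, so the clock change cannot interact with AD at all.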

  • How to exchange the HDD of a MacBook Pro?

    - by Another Registered User
    I've bought a Solid State Drive (SSD) for my MacBook Pro, and now I need to swap it in somehow. Would this strategy work? 1) Create a backup with Time Machine (Snow Leopard) 2) Remove the old HDD 3) Insert the new drive 4) Install Snow Leopard (same version as previously used) 5) Open up Time Machine and recover from the last backup. I'm not sure about how to do the last part. Is that hard? What are the necessary steps? Or is there a better way? Maybe I don't need to re-install Snow Leopard completely? Maybe the install CD already offers an option to recover from a backup?

  • Call Webservice without adding a WebReference - with Complex Types

    - by ck
    I'm using the code at This Site to call a webservice dynamically. [SecurityPermissionAttribute(SecurityAction.Demand, Unrestricted = true)] public static object CallWebService(string webServiceAsmxUrl, string serviceName, string methodName, object[] args) { System.Net.WebClient client = new System.Net.WebClient(); //-Connect To the web service using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl")) { //--Now read the WSDL file describing a service. ServiceDescription description = ServiceDescription.Read(stream); ///// LOAD THE DOM ///////// //--Initialize a service description importer. ServiceDescriptionImporter importer = new ServiceDescriptionImporter(); importer.ProtocolName = "Soap12"; // Use SOAP 1.2. importer.AddServiceDescription(description, null, null); //--Generate a proxy client. importer.Style = ServiceDescriptionImportStyle.Client; //--Generate properties to represent primitive values. importer.CodeGenerationOptions = System.Xml.Serialization.CodeGenerationOptions.GenerateProperties; //--Initialize a Code-DOM tree into which we will import the service. CodeNamespace nmspace = new CodeNamespace(); CodeCompileUnit unit1 = new CodeCompileUnit(); unit1.Namespaces.Add(nmspace); //--Import the service into the Code-DOM tree. This creates proxy code //--that uses the service. ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1); if (warning == 0) //--If zero then we are good to go { //--Generate the proxy code CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp"); //--Compile the assembly proxy with the appropriate references string[] assemblyReferences = new string[5] { "System.dll", "System.Web.Services.dll", "System.Web.dll", "System.Xml.dll", "System.Data.dll" }; CompilerParameters parms = new CompilerParameters(assemblyReferences); CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1); //-Check For Errors if (results.Errors.Count > 0) { StringBuilder sb = new StringBuilder(); foreach (CompilerError oops in results.Errors) { sb.AppendLine("========Compiler error============"); sb.AppendLine(oops.ErrorText); } throw new System.ApplicationException("Compile Error Occured calling webservice. " + sb.ToString()); } //--Finally, Invoke the web service method Type foundType = null; Type[] types = results.CompiledAssembly.GetTypes(); foreach (Type type in types) { if (type.BaseType == typeof(System.Web.Services.Protocols.SoapHttpClientProtocol)) { Console.WriteLine(type.ToString()); foundType = type; } } object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString()); MethodInfo mi = wsvcClass.GetType().GetMethod(methodName); return mi.Invoke(wsvcClass, args); } else { return null; } } } This works fine when I use built in types, but for my own classes, I get this: Event Type: Error Event Source: TDX Queue Service Event Category: None Event ID: 0 Date: 12/04/2010 Time: 12:12:38 User: N/A Computer: TDXRMISDEV01 Description: System.ArgumentException: Object of type 'TDXDataTypes.AgencyOutput' cannot be converted to type 'AgencyOutput'. 
Server stack trace: at System.RuntimeType.CheckValue(Object value, Binder binder, CultureInfo culture, BindingFlags invokeAttr) at System.Reflection.MethodBase.CheckArguments(Object[] parameters, Binder binder, BindingFlags invokeAttr, CultureInfo culture, Signature sig) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks) at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters) at TDXQueueEngine.GenericWebserviceProxy.CallWebService(String webServiceAsmxUrl, String serviceName, String methodName, Object[] args) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\GenericWebserviceProxy.cs:line 76 at TDXQueueEngine.TDXQueueWebserviceItem.Run() in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueueWebserviceItem.cs:line 99 at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink) Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase) at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData) at TDXQueueEngine.TDXQueue.RunProcess.EndInvoke(IAsyncResult result) at TDXQueueEngine.TDXQueue.processComplete(IAsyncResult ar) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueue.cs:line 130 For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. The classes reference the same assembly and the same version. Do I need to include my assembly as a reference when building the temporary assembly? If so, how? Thanks.
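    A note on the last question above: the exception is a type-identity problem rather than a serialization one. The proxy code compiled at runtime contains its own generated AgencyOutput class, and even though it has the same shape as TDXDataTypes.AgencyOutput it is a different .NET type, so reflection refuses the argument. Passing an extra assembly into the temporary compilation is done through CompilerParameters.ReferencedAssemblies; a sketch, assuming TDXDataTypes is already loaded in the calling process:

        // Sketch: add the assembly that defines the shared data contracts to the
        // references used when compiling the generated proxy code.
        CompilerParameters parms = new CompilerParameters(new string[]
        {
            "System.dll",
            "System.Web.Services.dll",
            "System.Web.dll",
            "System.Xml.dll",
            "System.Data.dll"
        });
        parms.ReferencedAssemblies.Add(typeof(TDXDataTypes.AgencyOutput).Assembly.Location);

    Referencing the assembly alone does not make the WSDL importer reuse the shared classes, though: the generated proxy still defines its own AgencyOutput, so the invocation arguments generally have to be instances of the types found in results.CompiledAssembly (built up via reflection, or round-tripped through XML serialization) rather than the originals.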

  • Nasty mono bug with F#

    - by Aurimas Anskaitis
    Hi, I have this monstrous f# 2.0 program My mono version is Mono JIT compiler version 2.9 (master/f593354 Sun Dec 26 03:15:55 EET 2010) Copyright (C) 2002-2010 Novell, Inc and Contributors. www.mono-project.com TLS: __thread SIGSEGV: altstack Notifications: epoll Architecture: x86 Disabled: none Misc: softdebug LLVM: supported, not enabled. GC: Included Boehm (with typed GC and Parallel Mark) //--------------------------------------------- module Main let rec gcd x y = if y = 0 then x else gcd y (x%y) let main = printfn "%i" (gcd 4 2) main //----------------------------------------------- And the problem is that output from running the program is as follows: Stacktrace: at (wrapper managed-to-native) System.Reflection.MonoMethodInfo.get_parameter_info (intptr,System.Reflection.MemberInfo) <0xffffffff at System.Reflection.MonoMethodInfo.GetParametersInfo (intptr,System.Reflection.MemberInfo) <0x00013 at System.Reflection.MonoCMethod.GetParameters () <0x00015 at System.Reflection.MonoCMethod.Invoke (object,System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x00035 at System.Reflection.MonoCMethod.Invoke (System.Reflection.BindingFlags,System.Reflection.Binder,object[],System.Globalization.CultureInfo) <0x00024 at System.Reflection.ConstructorInfo.Invoke (object[]) <0x0003f at System.Activator.CreateInstance (System.Type,bool) <0x0017c at System.Activator.CreateInstance (System.Type) <0x00012 at Microsoft.FSharp.Reflection.FSharpValue.MakeFunction (System.Type,Microsoft.FSharp.Core.FSharpFunc2<object, object>) <0x00145> at Microsoft.FSharp.Core.PrintfImpl.capture@529<b, c, d> (Microsoft.FSharp.Core.FSharpFunc2, Microsoft.FSharp.Core.FSharpFunc`2<char, Microsoft.FSharp.Core.Unit>, Microsoft.FSharp.Core.FSharpFunc`2,string,int,Microsoft.FSharp.Collections.FSharpList1<object>,System.Type,int) <0x00147> at Microsoft.FSharp.Core.PrintfImpl.gprintf<b, c, d, a> (Microsoft.FSharp.Core.FSharpFunc2, Microsoft.FSharp.Core.FSharpFunc`2<char, Microsoft.FSharp.Core.Unit>, Microsoft.FSharp.Core.FSharpFunc`2,Microsoft.FSharp.Core.PrintfFormat4<a, b, c, d>) <0x000dd> at Microsoft.FSharp.Core.PrintfModule.kprintf_imperative<a, b, c> (Microsoft.FSharp.Core.FSharpFunc2,b,Microsoft.FSharp.Core.FSharpFunc2<char, Microsoft.FSharp.Core.Unit>,Microsoft.FSharp.Core.PrintfFormat4) <0x00058 at Microsoft.FSharp.Core.PrintfModule.PrintFormatToTextWriterThen (Microsoft.FSharp.Core.FSharpFunc2<Microsoft.FSharp.Core.Unit, TResult>,System.IO.TextWriter,Microsoft.FSharp.Core.PrintfFormat4) <0x0004d at Microsoft.FSharp.Core.PrintfModule.PrintFormatLineToTextWriter (System.IO.TextWriter,Microsoft.FSharp.Core.PrintfFormat`4) <0x0004d at .$Main.main@ () <0x00042 Native stacktrace: mono() [0x80dc13b] mono() [0x811c65b] mono() [0x8059a11] [0x7af40c] mono() [0x8228214] mono() [0x8228214] mono() [0x8228214] mono() [0x8228214] mono() [0x8228214] mono() [0x8228214] mono() [0x8228214] mono() [0x8228214] mono() [0x8228282] mono() [0x822991e] mono() [0x822aa9d] mono(mono_array_new_specific+0xea) [0x813ba9a] mono() [0x81c63a1] mono() [0x8149ac8] [0xc04328] [0xc042e4] [0xc042be] [0xc0455e] [0xc0451d] [0xc044d8] [0xc0349d] [0xc0330b] [0xc02f9e] [0xbfe960] [0xbfe6c6] [0xbfe571] [0xbfe4de] [0xbfe44e] [0xbf9d2b] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] [0x9bf0724] 
[0x9bf0724] (the same frame repeated many more times) Debug info from gdb: Could not attach to process. If your uid matches the uid of the target process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf ptrace: Operation not permitted. ================================================================= Got a SIGSEGV while executing native code. This usually indicates a fatal error in the mono runtime or one of the native libraries used by your application. Aborted Is this a problem with Mono or with F#? By the way, the same function works when using pattern matching instead of "if".

  • boolean in java: what am I doing wrong?

    - by Cheesegraterr
    Hello, I am trying to make my boolean value work. I am new at programming java so I must be missing something simple. I am trying to make it so that if one of the tire pressures is below 35 or over 45 the system outputs "bad inflation" For class me must use a boolean which is what I tried. I cant figure out why this isnt working. No matter what I do the boolean is always true. Any tips? public class tirePressure { private static double getDoubleSystem1 () //Private routine to simply read a double in from the command line { String myInput1 = null; //Store the string that is read form the command line double numInput1 = 0; //Used to store the converted string into an double BufferedReader mySystem; //Buffer to store input mySystem = new BufferedReader (new InputStreamReader (System.in)); // creates a connection to system files or cmd try { myInput1 = mySystem.readLine (); //reads in data from console myInput1 = myInput1.trim (); //trim command cuts off unneccesary inputs } catch (IOException e) //checks for errors { System.out.println ("IOException: " + e); return -1; } numInput1 = Double.parseDouble (myInput1); //converts the string to an double return numInput1; //return double value to main program } static public void main (String[] args) { double TireFR; //double to store input from console double TireFL; double TireBR; double TireBL; boolean goodPressure; goodPressure = false; System.out.println ("Tire Pressure Checker"); System.out.println (" "); System.out.print ("Enter pressure of front left tire:"); TireFL = getDoubleSystem1 (); //read in an double from the user if (TireFL < 35 || TireFL > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } System.out.print ("Enter pressure of front right tire:"); TireFR = getDoubleSystem1 (); //read in an double from the user if (TireFR < 35 || TireFR > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } if (TireFL == TireFR) System.out.print (" "); else System.out.println ("Front tire pressures do not match"); System.out.println (" "); System.out.print ("Enter pressure of back left tire:"); TireBL = getDoubleSystem1 (); //read in an double from the user if (TireBL < 35 || TireBL > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } System.out.print ("Enter pressure of back right tire:"); TireBR = getDoubleSystem1 (); //read in an double from the user if (TireBR < 35 || TireBR > 45) { System.out.println ("Pressure out of range"); goodPressure = false; } if (TireBL == TireBR) System.out.print (" "); else System.out.println ("Back tire pressures do not match"); if (goodPressure = true) System.out.println ("Inflation is OK."); else System.out.println ("Inflation is BAD."); System.out.println (goodPressure); } //mainmethod } // tirePressure Class

  • How can I use System.Web.Caching.Cache in a Console application?

    - by Ron Klein
    Context: .Net 3.5, C# I'd like to have caching mechanism in my Console application. Instead of re-inventing the wheel, I'd like to use System.Web.Caching.Cache (and that's a final decision, I can't use other caching framework, don't ask why). However, it looks like System.Web.Caching.Cache is supposed to run only in a valid HTTP context. My very simple snippet looks like this: using System; using System.Web.Caching; using System.Web; Cache c = new Cache(); try { c.Insert("a", 123); } catch (Exception ex) { Console.WriteLine("cannot insert to cache, exception:"); Console.WriteLine(ex); } and the result is: cannot insert to cache, exception: System.NullReferenceException: Object reference not set to an instance of an object. at System.Web.Caching.Cache.Insert(String key, Object value) at MyClass.RunSnippet() So obviously, I'm doing something wrong here. Any ideas? Update: +1 to most answers, getting the cache via static methods is the correct usage, namely HttpRuntime.Cache and HttpContext.Current.Cache. Thank you all!
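    As the update at the end says, the fix is not to construct the Cache yourself: the usable instance is the one the runtime owns, exposed as HttpRuntime.Cache (or HttpContext.Current.Cache inside a request). That works in a console application too, as long as the project references System.Web.dll (full framework, not the Client Profile). A minimal sketch:

        using System;
        using System.Web;            // add a reference to System.Web.dll
        using System.Web.Caching;

        class Program
        {
            static void Main()
            {
                Cache c = HttpRuntime.Cache;   // the runtime-owned instance, not "new Cache()"
                c.Insert("a", 123);
                Console.WriteLine(c["a"]);     // prints 123
            }
        }

    The public Cache constructor exists for the framework's own infrastructure and is documented as not intended for application code, which is why Insert on a hand-built instance fails with a NullReferenceException.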

  • There is an error in XML document... when calling a web service

    - by Sigurjón Guðbergsson
    I have created a web service and a function in it that should return a list of 11thousand records retreived from a pervasive database Here is my function in the web service. [WebService(Namespace = "http://tempuri.org/")] [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)] [System.ComponentModel.ToolboxItem(false)] public class BBI : System.Web.Services.WebService { [WebMethod] public List<myObject> getAll() { List<myObject> result = new List<myObject>(); PsqlConnection conn = new PsqlConnection("Host=soemthing;Port=something;Database=something;Encoding=IBM861"); conn.Open(); string strSql = "select 0, 1, 2, 3, 4, 5 from something"; PsqlCommand DBCmd = new PsqlCommand(strSql, conn); PsqlDataReader myDataReader; myDataReader = DBCmd.ExecuteReader(); while (myDataReader.Read()) { myObject b = new myObject(); b.0 = Convert.ToInt32(myDataReader[0].ToString()); b.1 = myDataReader[1].ToString(); b.2 = myDataReader[2].ToString(); b.3 = myDataReader[3].ToString(); b.4 = myDataReader[4].ToString(); b.5 = myDataReader[5].ToString(); result.Add(b); } conn.Close(); myDataReader.Close(); return result; } } Then i add web reference to this web service in my client program and call the reference BBI. Then i call to the getAll function and get the error : There is an error in XML document (1, 63432). public List<BBI.myObject> getAll() { BBI.BBI bbi = new BBI.BBI(); List<BBI.myObject> allBooks = bbi.getAll().OfType<BBI.myObject>().ToList(); return allBooks; } Here is the total exception detail System.InvalidOperationException was unhandled by user code Message=There is an error in XML document (1, 71897). Source=System.Xml StackTrace: at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) at BBI.BBI.getAllBooks() in c:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\vefur\73db60db\a4ee31dd\App_WebReferences.jl1r8jv6.0.cs:line 252 at webServiceFuncions.getAllBooks() in c:\Documents and Settings\forritari\Desktop\Vefur - Nýr\BBI\trunk\Vefur\App_Code\webServiceFuncions.cs:line 59 InnerException: System.Xml.XmlException Message='', hexadecimal value 0x01, is an invalid character. Line 1, position 71897. 
Source=System.Xml LineNumber=1 LinePosition=71897 SourceUri="" StackTrace: at System.Xml.XmlTextReaderImpl.Throw(Exception e) at System.Xml.XmlTextReaderImpl.Throw(String res, String[] args) at System.Xml.XmlTextReaderImpl.Throw(Int32 pos, String res, String[] args) at System.Xml.XmlTextReaderImpl.ParseNumericCharRefInline(Int32 startPos, Boolean expand, StringBuilder internalSubsetBuilder, Int32& charCount, EntityType& entityType) at System.Xml.XmlTextReaderImpl.ParseCharRefInline(Int32 startPos, Int32& charCount, EntityType& entityType) at System.Xml.XmlTextReaderImpl.ParseText(Int32& startPos, Int32& endPos, Int32& outOrChars) at System.Xml.XmlTextReaderImpl.ParseText() at System.Xml.XmlTextReaderImpl.ParseElementContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlTextReader.Read() at System.Xml.XmlReader.ReadElementString() at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderBBI.Read2_Book(Boolean isNullable, Boolean checkType) at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderBBI.Read20_getAllBooksResponse() at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer35.Deserialize(XmlSerializationReader reader) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) InnerException: The database records are containing all kind of strange symbols, for example ¤rmann Kr. Einarsson and Tv” ‘fint˜ri Can someone see what im doing wrong here?
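    The inner exception points at the real problem: the database strings contain control characters (0x01) that are not legal in XML 1.0, so the serialized response breaks when the client deserializes it (the garbled Icelandic text also suggests the IBM861 data is not being decoded as intended, but the immediate failure is the control characters). One common approach, sketched below, is to strip characters that XML cannot carry before putting the values into the returned objects; XmlConvert.IsXmlChar is available from .NET 4.0, which the stack trace (v4.0.30319) indicates is in use:

        using System.Linq;
        using System.Xml;

        static class XmlText
        {
            // Strip characters (such as 0x01) that are not legal in an XML 1.0 document.
            public static string Sanitize(string value)
            {
                if (string.IsNullOrEmpty(value))
                    return value;
                return new string(value.Where(XmlConvert.IsXmlChar).ToArray());
            }
        }

        // e.g. in the reader loop:  b.1 = XmlText.Sanitize(myDataReader[1].ToString());

    This keeps the service from emitting XML the client can never parse; fixing the source data or the IBM861 decoding would be the longer-term cleanup.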

  • x86 Assembly: Before Making a System Call on Linux Should You Save All Registers?

    - by mudge
    I have the below code that opens up a file, reads it into a buffer and then closes the file. The close file system call requires that the file descriptor number be in the ebx register. The ebx register gets the file descriptor number before the read system call is made. My question is should I save the ebx register on the stack or somewhere before I make the read system call, (could int 80h trash the ebx register?). And then restore the ebx register for the close system call? Or is the code I have below fine and safe? I have run the below code and it works, I'm just not sure if it is generally considered good assembly practice or not because I don't save the ebx register before the int 80h read call. ;; open up the input file mov eax,5 ; open file system call number mov ebx,[esp+8] ; null terminated string file name, first command line parameter mov ecx,0o ; access type: O_RDONLY int 80h ; file handle or negative error number put in eax test eax,eax js Error ; test sign flag (SF) for negative number which signals error ;; read in the full input file mov ebx,eax ; assign input file descripter mov eax,3 ; read system call number mov ecx,InputBuff ; buffer to read into mov edx,INPUT_BUFF_LEN ; total bytes to read int 80h test eax,eax js Error ; if eax is negative then error jz Error ; if no bytes were read then error add eax,InputBuff ; add size of input to the begining of InputBuff location mov [InputEnd],eax ; assign address of end of input ;; close the input file ;; file descripter is already in ebx mov eax,6 ; close file system call number int 80h

  • How to develop an English .com domain value rating algorithm?

    - by Tom
    I've been thinking about an algorithm that should roughly be able to guess the value of an English .com domain in most cases. For this to work I want to perform tests that consider the strengths and weaknesses of an English .com domain. A simple point-based system is what I had in mind, where each domain property can be given a certain weight to factor in its importance. I had these properties in mind:
    Domain character length. E.g. initially 20 points are added. If the domain has 4 or fewer characters, no points are subtracted. For each extra character, one or more points are subtracted on an exponential basis (the more characters, the higher the penalty).
    Domain characters. E.g. initially 20 points are added. If the domain is only alphabetic, no points are subtracted. For each non-alphabetic character, X points are subtracted (exponential increase again).
    Domain name words. Scans through a big offline English database, including non-formal speech; e.g. words like "tweet" should be recognized. Question 1: where can I get a modern list of English words for use in such an application? Are these lists available for free? Are there lists like these with non-formal words? The more words are found per character, the more points are added. So, a domain with a lot of characters will still not get a lot of points.
    Word hype level. I believe this is a tricky one, but this should be what differentiates perfect but boring domains from perfect and interesting domains. For example, the following domain is probably not that valuable: www.peanutgalaxy.com. The algorithm should identify that peanuts and galaxies are not very popular topics on the web. This is just an example. On the other side, a domain like www.shopdeals.com should ring a bell for the hype test, as shops and deals are quite popular on the web. My initial thought would be to see how often these keywords are referenced on the web, preferably with some database. Question 2: is this logic flawed, or does this hype level test have merit? Question 3: are such "hype databases" available? Or is there anything else that could work offline? The problem with e.g. a query to Google is that it requires a lot of requests due to the many domains to be tested.
    Domain name spelling mistakes. Domains like "freemoneyz.com" etc. are generally (notice I am making a lot of assumptions in this post, but that's necessary I believe) not valuable due to the spelling mistakes. Question 4: are there any offline APIs available to check for spelling mistakes, preferably in JavaScript, or some database that I can interact with myself? Or should a word list help here as well?
    Use of consonants, vowels etc. A domain that is easy to pronounce (e.g. Google) is usually much more valuable than one that is not (e.g. Gkyld). Question 5: how does one test for such pronounceability? Do you check for consonants, vowels, etc.? What does a valuable domain have? Has there been any work in this field, and where should I look?
    That is what I came up with, which leads me to my final two questions. Question 6: can you think of any more English .com domain strengths or weaknesses? Which? How would you implement these? Question 7: do you believe this idea has any merit at all, or am I too naive? Anything I should know, read or hear about? Suggestions/comments? Thanks!
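    To make the point system concrete, below is a minimal sketch of the scoring skeleton described above. The weights, the exponential bases and the word-list handling are all placeholders to be tuned, and it assumes the dictionary (Question 1) is just a plain file with one lowercase word per line loaded into a HashSet:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        class DomainScorer
        {
            private readonly HashSet<string> _words;

            public DomainScorer(string wordListPath)
            {
                // Any offline English word list, one lowercase word per line.
                _words = new HashSet<string>(
                    File.ReadAllLines(wordListPath).Select(w => w.Trim().ToLowerInvariant()));
            }

            public double Score(string domain)
            {
                string name = domain.ToLowerInvariant().Replace("www.", "").Replace(".com", "");
                double score = 0;

                // 1) Length: 20 points, exponential penalty past 4 characters (base 1.5 is a placeholder).
                int extra = Math.Max(0, name.Length - 4);
                score += extra == 0 ? 20 : Math.Max(0, 20 - Math.Pow(1.5, extra));

                // 2) Characters: 20 points, exponential penalty per non-letter (base 2 is a placeholder).
                int nonAlpha = name.Count(ch => !char.IsLetter(ch));
                score += nonAlpha == 0 ? 20 : Math.Max(0, 20 - Math.Pow(2, nonAlpha));

                // 3) Dictionary words: reward recognized words relative to the name's length.
                int wordChars = _words.Where(name.Contains).Sum(w => w.Length);
                score += 20.0 * Math.Min(1.0, (double)wordChars / Math.Max(1, name.Length));

                return score;
            }
        }

    The hype, spelling and pronounceability tests would slot in as additional terms in Score; they are left out here because they depend on the data sources asked about in Questions 2-5.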

  • How to implement the "System.out.println(ClassName::MethodName <then my message>)" of Eclipse in Netbeans?

    - by Sen
    I would like to know if there is the same feature as in eclipse to automatically generate and print the System.out.println(ClassName::MethodName <then my message>) functionality (which will print the class name and method name for debugging in the console) in Netbeans also. For example, in Eclipse Editor, Typing syst + Ctrl+ Space will auto generate a System.out.println(ClassName::MethodName ) type output in the console. Is such a method available in Netbeans? As of now, I have only two methods here in Netbeans: sout + Tab (System.out.println()) and soutv + Tab (System.out.println(prints the variable used just above the line)) automatically. Let me rephrase, instead of myMethod1, I want to get the enclosing method name. Eg. : public class X { public void myMethod1(int a) { System.out.println(X::myMethod1()); // This should be produced when I type the Code-Template abbreviation (example: syst) and press tab (or corresponding key). } } public class Y { public void myMethod2(int b) { System.out.println(Y::myMethod2()); // This should be produced when I type the Code-Template abbreviation (example: syst) and press tab (or corresponding key). } } Update: With the following code template: syst = System.out.println("${classVar editable="false" currClassName default="getClass()"}"); I am able to print the classname, but still no clue for the Method name.

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and just thinking about the best way to organize my code. The project is basically going to be a general management system that will be capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wikis, forums, project management system, etc., each of them being just a separate 'module'. You can read more about it on my blog posted here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks). For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage the users system. Then you just add on modules as you need them, and the module I will be creating is a simple blog, to test it out before I move on to the big module, which is a project management system. Now I am just trying to think of the best way to structure this so that it is easy for users to add in their own modules but easy for me to update the core system without worrying about the user modifying the core code. I think the ideal way would be to have a number of core projects that users are specifically told not to modify, otherwise the system may become unstable and future updates would not work. When the user wants to add in their own modules, they would just add in a new project (or multiple projects). The thing is, I am not sure that it is even possible to use multiple projects, all with their own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript and image files, and would also need access to some of the core Razor view templates, CSS, JavaScript and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?
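    On the narrow technical question: controllers compiled into separate class-library projects work, because MVC locates controllers by scanning the assemblies loaded into the web application, so referencing the module project is enough. Razor views, CSS, JavaScript and images are resolved by virtual path, though, so they either have to be copied into the web application's folder structure at build time or served through a custom view engine / VirtualPathProvider. A sketch of widening the view search paths for a per-module folder convention (the ~/Modules layout here is an assumption, not something MVC prescribes):

        using System.Web.Mvc;

        // Sketch: also let each controller's views live under ~/Modules/<ControllerName>/.
        public class ModuleViewEngine : RazorViewEngine
        {
            public ModuleViewEngine()
            {
                ViewLocationFormats = new[]
                {
                    "~/Views/{1}/{0}.cshtml",
                    "~/Views/Shared/{0}.cshtml",
                    "~/Modules/{1}/Views/{0}.cshtml",
                };
                PartialViewLocationFormats = ViewLocationFormats;
            }
        }

        // In Global.asax Application_Start():
        //   ViewEngines.Engines.Add(new ModuleViewEngine());

    With something like this, a module project can keep its controllers and supporting code in its own assembly while its .cshtml, CSS and script files are deployed under the module folder; embedding the views inside the module assembly itself is also possible, but that requires a VirtualPathProvider.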

  • Ninject.ActivationException: Error activating IMainLicense

    - by Stefan Karlsson
    Im don't know fully how Ninject works thats wye i ask this question here to figure out whats wrong. If i create a empty constructor in ClaimsSecurityService it gets hit. This is my error: Error activating IMainLicense No matching bindings are available, and the type is not self-bindable. Activation path: 3) Injection of dependency IMainLicense into parameter mainLicenses of constructor of type ClaimsSecurityService 2) Injection of dependency ISecurityService into parameter securityService of constructor of type AccountController 1) Request for AccountController Stack: Ninject.KernelBase.Resolve(IRequest request) +474 Ninject.Planning.Targets.Target`1.GetValue(Type service, IContext parent) +153 Ninject.Planning.Targets.Target`1.ResolveWithin(IContext parent) +747 Ninject.Activation.Providers.StandardProvider.GetValue(IContext context, ITarget target) +269 Ninject.Activation.Providers.<>c__DisplayClass4.<Create>b__2(ITarget target) +69 System.Linq.WhereSelectArrayIterator`2.MoveNext() +66 System.Linq.Buffer`1..ctor(IEnumerable`1 source) +216 System.Linq.Enumerable.ToArray(IEnumerable`1 source) +77 Ninject.Activation.Providers.StandardProvider.Create(IContext context) +847 Ninject.Activation.Context.ResolveInternal(Object scope) +218 Ninject.Activation.Context.Resolve() +277 Ninject.<>c__DisplayClass15.<Resolve>b__f(IBinding binding) +86 System.Linq.WhereSelectEnumerableIterator`2.MoveNext() +145 System.Linq.Enumerable.SingleOrDefault(IEnumerable`1 source) +4059897 Ninject.Planning.Targets.Target`1.GetValue(Type service, IContext parent) +169 Ninject.Planning.Targets.Target`1.ResolveWithin(IContext parent) +747 Ninject.Activation.Providers.StandardProvider.GetValue(IContext context, ITarget target) +269 Ninject.Activation.Providers.<>c__DisplayClass4.<Create>b__2(ITarget target) +69 System.Linq.WhereSelectArrayIterator`2.MoveNext() +66 System.Linq.Buffer`1..ctor(IEnumerable`1 source) +216 System.Linq.Enumerable.ToArray(IEnumerable`1 source) +77 Ninject.Activation.Providers.StandardProvider.Create(IContext context) +847 Ninject.Activation.Context.ResolveInternal(Object scope) +218 Ninject.Activation.Context.Resolve() +277 Ninject.<>c__DisplayClass15.<Resolve>b__f(IBinding binding) +86 System.Linq.WhereSelectEnumerableIterator`2.MoveNext() +145 System.Linq.Enumerable.SingleOrDefault(IEnumerable`1 source) +4059897 Ninject.Web.Mvc.NinjectDependencyResolver.GetService(Type serviceType) +145 System.Web.Mvc.DefaultControllerActivator.Create(RequestContext requestContext, Type controllerType) +87 [InvalidOperationException: An error occurred when trying to create a controller of type 'Successful.Struct.Web.Controllers.AccountController'. Make sure that the controller has a parameterless public constructor.] 
System.Web.Mvc.DefaultControllerActivator.Create(RequestContext requestContext, Type controllerType) +247 System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(RequestContext requestContext, Type controllerType) +438 System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) +257 System.Web.Mvc.MvcHandler.ProcessRequestInit(HttpContextBase httpContext, IController& controller, IControllerFactory& factory) +326 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state) +157 System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContext httpContext, AsyncCallback callback, Object state) +88 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData) +50 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +301 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Account controller: public class AccountController : Controller { private readonly ISecurityService _securityService; public AccountController(ISecurityService securityService) { _securityService = securityService; } // // GET: /Account/Login [AllowAnonymous] public ActionResult Login(string returnUrl) { ViewBag.ReturnUrl = returnUrl; return View(); } } NinjectWebCommon: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Http; using System.Web.Http.Dependencies; using Microsoft.Web.Infrastructure.DynamicModuleHelper; using Ninject; using Ninject.Extensions.Conventions; using Ninject.Parameters; using Ninject.Syntax; using Ninject.Web.Common; using Successful.Struct.Web; [assembly: WebActivator.PreApplicationStartMethod(typeof(NinjectWebCommon), "Start")] [assembly: WebActivator.ApplicationShutdownMethodAttribute(typeof(NinjectWebCommon), "Stop")] namespace Successful.Struct.Web { public static class NinjectWebCommon { private static readonly Bootstrapper Bootstrapper = new Bootstrapper(); /// <summary> /// Starts the application /// </summary> public static void Start() { DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule)); DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule)); Bootstrapper.Initialize(CreateKernel); } /// <summary> /// Stops the application. /// </summary> public static void Stop() { Bootstrapper.ShutDown(); } /// <summary> /// Creates the kernel that will manage your application. /// </summary> /// <returns>The created kernel.</returns> private static IKernel CreateKernel() { var kernel = new StandardKernel(); kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel); kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>(); kernel.Load("Successful*.dll"); kernel.Bind(x => x.FromAssembliesMatching("Successful*.dll") .SelectAllClasses() .BindAllInterfaces() ); GlobalConfiguration.Configuration.DependencyResolver = new NinjectResolver(kernel); RegisterServices(kernel); return kernel; } /// <summary> /// Load your modules or register your services here! 
/// </summary> /// <param name="kernel">The kernel.</param> private static void RegisterServices(IKernel kernel) { } } public class NinjectResolver : NinjectScope, IDependencyResolver { private readonly IKernel _kernel; public NinjectResolver(IKernel kernel) : base(kernel) { _kernel = kernel; } public IDependencyScope BeginScope() { return new NinjectScope(_kernel.BeginBlock()); } } public class NinjectScope : IDependencyScope { protected IResolutionRoot ResolutionRoot; public NinjectScope(IResolutionRoot kernel) { ResolutionRoot = kernel; } public object GetService(Type serviceType) { var request = ResolutionRoot.CreateRequest(serviceType, null, new Parameter[0], true, true); return ResolutionRoot.Resolve(request).SingleOrDefault(); } public IEnumerable<object> GetServices(Type serviceType) { var request = ResolutionRoot.CreateRequest(serviceType, null, new Parameter[0], true, true); return ResolutionRoot.Resolve(request).ToList(); } public void Dispose() { var disposable = (IDisposable)ResolutionRoot; if (disposable != null) disposable.Dispose(); ResolutionRoot = null; } } } ClaimsSecurityService: public class ClaimsSecurityService : ISecurityService { private const string AscClaimsIdType = "http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider"; private const string SuccessfulStructWebNamespace = "Successful.Struct.Web"; private readonly IMainLicense _mainLicenses; private readonly ICompany _companys; private readonly IAuthTokenService _authService; [Inject] public IApplicationContext ApplicationContext { get; set; } [Inject] public ILogger<LocationService> Logger { get; set; } public ClaimsSecurityService(IMainLicense mainLicenses, ICompany companys, IAuthTokenService authService) { _mainLicenses = mainLicenses; _companys = companys; _authService = authService; } }
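    Reading the activation path from the bottom up: MVC asks Ninject for AccountController, Ninject picks ClaimsSecurityService for the ISecurityService parameter, and then finds no binding for the IMainLicense parameter of that constructor. The convention scan (BindAllInterfaces) only produces bindings for interfaces that have a concrete implementing class inside the assemblies matched by "Successful*.dll", so either nothing implements IMainLicense or its assembly falls outside the scan. An explicit binding in RegisterServices is the direct fix; a sketch, where MainLicense and Company stand in for whatever concrete classes actually exist in the solution:

        private static void RegisterServices(IKernel kernel)
        {
            // Explicit bindings so the ClaimsSecurityService constructor can be satisfied.
            // MainLicense and Company are placeholder names for the real implementations.
            kernel.Bind<IMainLicense>().To<MainLicense>();
            kernel.Bind<ICompany>().To<Company>();
        }

    This also explains why adding an empty constructor to ClaimsSecurityService appears to "work": Ninject falls back to the constructor whose parameters it can resolve, so the dependencies simply never get injected.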

  • WCF Service returning 400 error: The body of the message cannot be read because it is empty

    - by Josh
    I have a WCF service that is causing a bit of a headache. I have tracing enabled, I have an object with a data contract being built and passed in, but I am seeing this error in the log: <TraceData> <DataItem> <TraceRecord xmlns="http://schemas.microsoft.com/2004/10/E2ETraceEvent/TraceRecord" Severity="Error"> <TraceIdentifier>http://msdn.microsoft.com/en-US/library/System.ServiceModel.Diagnostics.ThrowingException.aspx</TraceIdentifier> <Description>Throwing an exception.</Description> <AppDomain>efb0d0d7-1-129315381593520544</AppDomain> <Exception> <ExceptionType>System.ServiceModel.ProtocolException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>There is a problem with the XML that was received from the network. See inner exception for more details.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString> System.ServiceModel.ProtocolException: There is a problem with the XML that was received from the network. See inner exception for more details. ---&amp;gt; System.Xml.XmlException: The body of the message cannot be read because it is empty. 
--- End of inner exception stack trace --- </ExceptionString> <InnerException> <ExceptionType>System.Xml.XmlException, System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>The body of the message cannot be read because it is empty.</Message> <StackTrace> at System.ServiceModel.Channels.HttpRequestContext.CreateMessage() at System.ServiceModel.Channels.HttpChannelListener.HttpContextReceived(HttpRequestContext context, Action callback) at System.ServiceModel.Activation.HostedHttpTransportManager.HttpContextReceived(HostedHttpRequestAsyncResult result) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.Runtime.IOThreadScheduler.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.Runtime.Fx.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP) </StackTrace> <ExceptionString>System.Xml.XmlException: The body of the message cannot be read because it is empty.</ExceptionString> </InnerException> </Exception> </TraceRecord> </DataItem> </TraceData> So, here is my service interface: [ServiceContract] public interface IRDCService { [OperationContract] Response<Customer> GetCustomer(CustomerRequest request); [OperationContract] Response<Customer> GetSiteCustomers(CustomerRequest request); } And here is my service instance public class RDCService : IRDCService { ICustomerService customerService; public RDCService() { //We have to locate the instance from structuremap manually because web services *REQUIRE* a default constructor customerService = ServiceLocator.Locate<ICustomerService>(); } public Response<Customer> GetCustomer(CustomerRequest request) { return customerService.GetCustomer(request); } public Response<Customer> GetSiteCustomers(CustomerRequest request) { return customerService.GetSiteCustomers(request); } } The configuration for the web service (server side) looks like this: <system.serviceModel> <diagnostics> <messageLogging logMalformedMessages="true" logMessagesAtServiceLevel="true" logMessagesAtTransportLevel="true" /> </diagnostics> <services> <service behaviorConfiguration="MySite.Web.Services.RDCServiceBehavior" name="MySite.Web.Services.RDCService"> <endpoint address="http://localhost:27433" binding="wsHttpBinding" contract="MySite.Common.Services.Web.IRDCService"> <identity> <dns value="localhost:27433" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MySite.Web.Services.RDCServiceBehavior"> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment --> <serviceMetadata httpGetEnabled="true"/> <!-- To receive exception details in faults for debugging purposes, set the value below to true. 
Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true"/> <dataContractSerializer maxItemsInObjectGraph="6553600" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> Here is what my request object looks like [DataContract] public class CustomerRequest : RequestBase { [DataMember] public int Id { get; set; } [DataMember] public int SiteId { get; set; } } And the RequestBase: [DataContract] public abstract class RequestBase : IRequest { #region IRequest Members [DataMember] public int PageSize { get; set; } [DataMember] public int PageIndex { get; set; } #endregion } And my IRequest interface public interface IRequest { int PageSize { get; set; } int PageIndex { get; set; } } And I have a wrapper class around my service calls. Here is the class. public class MyService : IMyService { IRDCService service; public MyService() { //service = new MySite.RDCService.RDCServiceClient(); EndpointAddress address = new EndpointAddress(APISettings.Default.ServiceUrl); BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None); binding.TransferMode = TransferMode.Streamed; binding.MaxBufferSize = 65536; binding.MaxReceivedMessageSize = 4194304; ChannelFactory<IRDCService> factory = new ChannelFactory<IRDCService>(binding, address); service = factory.CreateChannel(); } public Response<Customer> GetCustomer(CustomerRequest request) { return service.GetCustomer(request); } public Response<Customer> GetSiteCustomers(CustomerRequest request) { return service.GetSiteCustomers(request); } } and finally, the response object. [DataContract] public class Response<T> { [DataMember] public IEnumerable<T> Results { get; set; } [DataMember] public int TotalResults { get; set; } [DataMember] public int PageIndex { get; set; } [DataMember] public int PageSize { get; set; } [DataMember] public RulesException Exception { get; set; } } So, when I build my CustomerRequest object and pass it in, for some reason it's hitting the server as an empty request. Any ideas why? I've tried upping the object graph and the message size. When I debug it stops in the wrapper class with the 400 error. I'm not sure if there is a serialization error, but considering the object contract is 4 integer properties I can't imagine it causing an issue.
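    One mismatch stands out in the code above: the service publishes its endpoint with wsHttpBinding, but the hand-rolled ChannelFactory in the wrapper class builds a BasicHttpBinding with TransferMode.Streamed. A SOAP 1.1 streamed request arriving at a SOAP 1.2 wsHttpBinding endpoint is a plausible way to end up with HTTP 400 and "the body of the message cannot be read because it is empty". The bindings have to match on both sides; a sketch of aligning the client with the server (the alternative is to change the service endpoint to basicHttpBinding instead):

        using System.ServiceModel;

        EndpointAddress address = new EndpointAddress(APISettings.Default.ServiceUrl);

        // Match the server's wsHttpBinding; security is turned off here only because the
        // original client code did the same - adjust to whatever the service actually requires.
        WSHttpBinding binding = new WSHttpBinding(SecurityMode.None);
        binding.MaxReceivedMessageSize = 4194304;

        ChannelFactory<IRDCService> factory = new ChannelFactory<IRDCService>(binding, address);
        IRDCService service = factory.CreateChannel();

    Note that the default wsHttpBinding applies message security and does not support streaming, so whichever binding is chosen, its security mode and transfer settings need to agree between the client code and the service configuration.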

  • Tips on how to refactor this unwieldy upvote/downvote code

    - by bob_cobb
    Basically this code is for an upvote/downvote system and I'm basically Incrementing the count by 1 when voting up Decrementing the count by 1 when voting down If the number of downvotes upvotes, we'll assume it's a negative score, so the count stays 0 Reverting the count back to what it originally was when clicking upvote twice or downvote twice Never go below 0 (by showing negative numbers); Basically it's the same scoring scheme reddit uses, and I tried to get some ideas from the source which was minified and kind of hard to grok: a.fn.vote = function(b, c, e, j) { if (reddit.logged && a(this).hasClass("arrow")) { var k = a(this).hasClass("up") ? 1 : a(this).hasClass("down") ? -1 : 0, v = a(this).all_things_by_id(), p = v.children().not(".child").find(".arrow"), q = k == 1 ? "up" : "upmod"; p.filter("." + q).removeClass(q).addClass(k == 1 ? "upmod" : "up"); q = k == -1 ? "down" : "downmod"; p.filter("." + q).removeClass(q).addClass(k == -1 ? "downmod" : "down"); reddit.logged && (v.each(function() { var b = a(this).find(".entry:first, .midcol:first"); k > 0 ? b.addClass("likes").removeClass("dislikes unvoted") : k < 0 ? b.addClass("dislikes").removeClass("likes unvoted") : b.addClass("unvoted").removeClass("likes dislikes") }), a.defined(j) || (j = v.filter(":first").thing_id(), b += e ? "" : "-" + j, a.request("vote", {id: j,dir: k,vh: b}))); c && c(v, k) } }; I'm trying to look for a pattern, but there are a bunch of edge cases that I've been adding in, and it's still a little off. My code (and fiddle): $(function() { var down = $('.vote-down'); var up = $('.vote-up'); var direction = up.add(down); var largeCount = $('#js-large-count'); var totalUp = $('#js-total-up'); var totalDown = $('#js-total-down'); var totalUpCount = parseInt(totalUp.text(), 10); var totalDownCount = parseInt(totalDown.text(), 10); var castVote = function(submissionId, voteType) { /* var postURL = '/vote'; $.post(postURL, { submissionId : submissionId, voteType : voteType } , function (data){ if (data.response === 'success') { totalDown.text(data.downvotes); totalUp.text(data.upvotes); } }, 'json'); */ alert('voted!'); }; $(direction).on('click', direction, function () { // The submission ID var $that = $(this), submissionId = $that.attr('id'), voteType = $that.attr('dir'), // what direction was voted? 
[up or down] isDown = $that.hasClass('down'), isUp = $that.hasClass('up'), curVotes = parseInt($that.parent().find('div.count').text(), 10); // current vote castVote(submissionId, voteType); // Voted up on submission if (voteType === 'up') { var alreadyVotedUp = $that.hasClass('likes'), upCount = $that.next('div.count'), dislikes = $that.nextAll('a').first(); // next anchor attr if (alreadyVotedUp) { // Clicked the up arrow and previously voted up $that.toggleClass('likes up'); if (totalUpCount > totalDownCount) { upCount.text(curVotes - 1); largeCount.text(curVotes - 1); } else { upCount.text(0); largeCount.text(0); } upCount.css('color', '#555').hide().fadeIn(); largeCount.hide().fadeIn(); } else if (dislikes.hasClass('dislikes')) { // Voted down now are voting up if (totalDownCount > totalUpCount) { upCount.text(0); largeCount.text(0); } else if (totalUpCount > totalDownCount) { console.log(totalDownCount); console.log(totalUpCount); if (totalDownCount === 0) { upCount.text(curVotes + 1); largeCount.text(curVotes + 1); } else { upCount.text(curVotes + 2); largeCount.text(curVotes + 2); } } else { upCount.text(curVotes + 1); largeCount.text(curVotes + 1); } dislikes.toggleClass('down dislikes'); upCount.css('color', '#296394').hide().fadeIn(200); largeCount.hide().fadeIn(); } else { if (totalDownCount > totalUpCount) { upCount.text(0); largeCount.text(0); } else { // They clicked the up arrow and haven't voted up yet upCount.text(curVotes + 1); largeCount.text(curVotes + 1).hide().fadeIn(200); upCount.css('color', '#296394').hide().fadeIn(200); } } // Change arrow to dark blue if (isUp) { $that.toggleClass('up likes'); } } // Voted down on submission if (voteType === 'down') { var alreadyVotedDown = $that.hasClass('dislikes'), downCount = $that.prev('div.count'); // Get previous anchor attribute var likes = $that.prevAll('a').first(); if (alreadyVotedDown) { if (curVotes === 0) { if (totalDownCount > totalUp) { downCount.text(curVotes); largeCount.text(curVotes); } else { if (totalUpCount < totalDownCount || totalUpCount == totalDownCount) { downCount.text(0); largeCount.text(0); } else { downCount.text((totalUpCount - totalUpCount) + 1); largeCount.text((totalUpCount - totalUpCount) + 1); } } } else { downCount.text(curVotes + 1); largeCount.text(curVotes + 1); } $that.toggleClass('down dislikes'); downCount.css('color', '#555').hide().fadeIn(200); largeCount.hide().fadeIn(); } else if (likes.hasClass('likes')) { // They voted up from 0, and now are voting down if (curVotes <= 1) { downCount.text(0); largeCount.text(0); } else { // They voted up, now they are voting down (from a number > 0) downCount.text(curVotes - 2); largeCount.text(curVotes - 2); } likes.toggleClass('up likes'); downCount.css('color', '#ba2a2a').hide().fadeIn(200); largeCount.hide().fadeIn(200); } else { if (curVotes > 0) { downCount.text(curVotes - 1); largeCount.text(curVotes - 1); } else { downCount.text(curVotes); largeCount.text(curVotes); } downCount.css('color', '#ba2a2a').hide().fadeIn(200); largeCount.hide().fadeIn(200); } // Change the arrow to red if (isDown) { $that.toggleClass('down dislikes'); } } return false; }); });? Pretty convoluted, right? Is there a way to do something similar but in about 1/3 of the code I've written? After attempting to re-write it, I find myself doing the same thing so I just gave up halfway through and decided to ask for some help (fiddle of most recent).
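    Most of the branches above are doing the same arithmetic. If the page keeps track of the user's current vote (-1, 0 or +1), the new displayed count is just the old count minus the old vote plus the new vote, clamped at zero, and "clicking the same arrow twice" falls out naturally because the new vote becomes 0. The arithmetic is language-agnostic; it is sketched below in C# for brevity, and the jQuery handler would only have to mirror it while toggling classes:

        using System;

        static class VoteMath
        {
            // newVote is +1 for the up arrow, -1 for the down arrow;
            // currentVote is what this user has already cast: -1, 0 or +1.
            public static int NextVote(int currentVote, int newVote)
            {
                // Clicking the arrow you already clicked cancels the vote.
                return currentVote == newVote ? 0 : newVote;
            }

            public static int NextCount(int displayedCount, int currentVote, int nextVote)
            {
                // Undo the old vote, apply the new one, never display a negative score.
                return Math.Max(0, displayedCount - currentVote + nextVote);
            }
        }

    The click handler then only has to update the arrow classes and fade the new number in. One caveat: clamping the displayed number at zero means it can drift from the true upvotes-minus-downvotes once it has bottomed out, so the server-side totals should remain the source of truth.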

  • MS Dynamics CRM trapping .NET error before I can handle it

    - by clifgriffin
    This is a fun one. I have written a custom search page that provides faster, more user friendly searches than the default Contacts view and also allows searching of Leads and Contacts simultaneously. It uses GridViews bound to SqlDataSources that query filtered views. I'm sure someone will complain that I'm not using the web services for this, but this is just the design decision we made. These GridViews live in UpdatePanels to enable very slick AJAX updates upon search. It's all working great. Nearly ready to be deployed, except for one thing: Some long running searches are triggering an uncatchable SQL timeout exception. [SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.] at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlDataReader.ConsumeMetaData() at System.Data.SqlClient.SqlDataReader.get_MetaData() at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method) at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method) at System.Data.SqlClient.SqlCommand.ExecuteDbDataReader(CommandBehavior behavior) at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior) at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable) at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments) at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback) at System.Web.UI.WebControls.DataBoundControl.PerformSelect() at System.Web.UI.WebControls.BaseDataBoundControl.DataBind() at System.Web.UI.WebControls.GridView.DataBind() at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound() at System.Web.UI.WebControls.CompositeDataBoundControl.CreateChildControls() at System.Web.UI.Control.EnsureChildControls() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Control.PreRenderRecursiveInternal() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, 
Boolean includeStagesAfterAsyncPoint) I found that CRM is doing a server.transfer to capture this error because my UpdatePanels started throwing JavaSript errors when this error would occur. I was only able to get the full error message by using the JavaScript debugger in IE. Having found this error, I thought the solution would be simple. I just needed to wrap my databind calls in try/catch blocks to capture any errors. Unfortunately it seems CRM's IIS configuration has the magic ability to capture this error before it ever gets back to my code. Using the debugger I never see it. It never gets to my catch blocks, but it's clearly happening in the SQL Data Source which is clearly (by the stack trace) being triggered by my GridView bind. Any ideas on this? It's driving me crazy. Code Behind (with some irrelevant functions omitted): protected void Page_Load(object sender, EventArgs e) { //Initialize some stuff this.bannerOracle = new OdbcConnection(ConfigurationManager.ConnectionStrings["OracleConnectionString"].ConnectionString); //Prospect default HideProspects(); HideProspectAddressColumn(); //Contacts default HideContactAddressColumn(); //Default error messages gvContacts.EmptyDataText = "Sad day. Your search returned no contacts."; gvProspects.EmptyDataText = "Sad day. Your search returned no prospects."; //New search try { SearchContact(null, -1); } catch { gvContacts.EmptyDataText = "Oops! An error occured. This may have been a timeout. Please try your search again."; gvContacts.DataSource = null; gvContacts.DataBind(); } } protected void txtSearchString_TextChanged(object sender, EventArgs e) { if(!String.IsNullOrEmpty(txtSearchString.Text)) { try { SearchContact(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue)); } catch { gvContacts.EmptyDataText = "Oops! An error occured. This may have been a timeout. Please try your search again."; gvContacts.DataSource = null; gvContacts.DataBind(); } if (chkProspects.Checked == true) { try { SearchProspect(txtSearchString.Text, Convert.ToInt16(lstSearchType.SelectedValue)); } catch { gvProspects.EmptyDataText = "Oops! An error occured. This may have been a timeout. 
Please try your search again."; gvProspects.DataSource = null; gvProspects.DataBind(); } finally { ShowProspects(); } } else { HideProspects(); } } } protected void SearchContact(string search, int type) { SqlCRM_Contact.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString; gvContacts.DataSourceID = "SqlCRM_Contact"; string strQuery = ""; string baseQuery = @"SELECT filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname, 'none' AS line1, filteredcontact.emailaddress1, filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename FROM filteredcontact "; switch(type) { case LASTFIRST: strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LAST: strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case FIRST: strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LIBERTYID: strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case EMAIL: strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case TELEPHONE: strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case BIRTHDAY: strQuery = baseQuery + "WHERE filteredcontact.birthdateutc BETWEEN @dateStart AND @dateEnd AND filteredcontact.statecode = 0"; try { DateTime temp = DateTime.Parse(search); if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; } else { search = temp.ToString("yyyy-MM-dd"); } } catch { search = string.Empty; } SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000"); SqlCRM_Contact.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999"); break; case SSN: //Do something break; case ADDRESS: strQuery = @"SELECT contactid, new_libertyid, fullname, line1, emailaddress1, telephone1, birthdate, gendercodename FROM (SELECT FC.contactid, FC.new_libertyid, FC.fullname, FA.line1, FC.emailaddress1, FC.telephone1, FC.birthdateutc AS birthdate, FC.gendercodename, ROW_NUMBER() OVER(PARTITION BY FC.contactid ORDER BY FC.contactid DESC) AS rn FROM filteredcontact FC INNER JOIN FilteredCustomerAddress FA ON FC.contactid = FA.parentid WHERE FA.line1 LIKE @value AND FA.addressnumber <> 1 AND FC.statecode = 0 ) AS RESULTS WHERE rn = 1"; SqlCRM_Contact.SelectCommand = strQuery; SqlCRM_Contact.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); ShowContactAddressColumn(); break; default: strQuery = @"SELECT TOP 500 filteredcontact.contactid, filteredcontact.new_libertyid, filteredcontact.fullname, 'none' AS line1, 
filteredcontact.emailaddress1, filteredcontact.telephone1, filteredcontact.birthdateutc AS birthdate, filteredcontact.gendercodename FROM filteredcontact WHERE filteredcontact.statecode = 0"; SqlCRM_Contact.SelectCommand = strQuery; break; } if (type != ADDRESS) { HideContactAddressColumn(); } gvContacts.PageIndex = 0; //try //{ // SqlCRM_Contact.DataBind(); //} //catch //{ // SqlCRM_Contact.DataBind(); //} gvContacts.DataBind(); } protected void SearchProspect(string search, int type) { SqlCRM_Prospect.ConnectionString = ConfigurationManager.ConnectionStrings["MSSQLConnectionString"].ConnectionString; gvProspects.DataSourceID = "SqlCRM_Prospect"; string strQuery = ""; string baseQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead "; switch (type) { case LASTFIRST: strQuery = baseQuery + "WHERE fullname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LAST: strQuery = baseQuery + "WHERE lastname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case FIRST: strQuery = baseQuery + "WHERE firstname LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case LIBERTYID: strQuery = baseQuery + "WHERE new_libertyid LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case EMAIL: strQuery = baseQuery + "WHERE emailaddress1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case TELEPHONE: strQuery = baseQuery + "WHERE telephone1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); break; case BIRTHDAY: strQuery = baseQuery + "WHERE filteredlead.lu_dateofbirth BETWEEN @dateStart AND @dateEnd AND filteredlead.statecode = 0"; try { DateTime temp = DateTime.Parse(search); if (temp.Year < 1753 || temp.Year > 9999) { search = string.Empty; } else { search = temp.ToString("yyyy-MM-dd"); } } catch { search = string.Empty; } SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("dateStart", DbType.String, search.Trim() + " 00:00:00.000"); SqlCRM_Prospect.SelectParameters.Add("dateEnd", DbType.String, search.Trim() + " 23:59:59.999"); break; case SSN: //Do nothing break; case ADDRESS: strQuery = @"SELECT filteredlead.leadid, filteredlead.fullname, filteredlead.address1_line1, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead WHERE address1_line1 LIKE @value AND filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; SqlCRM_Prospect.SelectParameters.Add("value", DbType.String, search.Trim() + "%"); ShowProspectAddressColumn(); break; default: strQuery = @"SELECT TOP 500 filteredlead.leadid, filteredlead.fullname, 'none' AS address1_line1 
, filteredlead.emailaddress1, filteredlead.telephone1, filteredlead.lu_dateofbirthutc AS lu_dateofbirth, filteredlead.lu_gendername FROM filteredlead WHERE filteredlead.statecode = 0"; SqlCRM_Prospect.SelectCommand = strQuery; break; } if (type != ADDRESS) { HideProspectAddressColumn(); } gvProspects.PageIndex = 0; //try //{ // SqlCRM_Prospect.DataBind(); //} //catch (Exception ex) //{ // SqlCRM_Prospect.DataBind(); //} gvProspects.DataBind(); }
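One thing I plan to try (sketched below, but not yet verified against CRM's IIS configuration) is to intercept the failure at the data source instead of at the bind call. The SqlDataSource raises Selecting and Selected events; the Selected event exposes the exception through SqlDataSourceStatusEventArgs before the page-level error handling sees it, and the Selecting event allows the command timeout to be raised. The handler names and the 120-second value are illustrative only, and the handlers would be wired up via OnSelecting/OnSelected on the SqlDataSource markup.

protected void SqlCRM_Contact_Selecting(object sender, SqlDataSourceSelectingEventArgs e)
{
    // Give long-running searches more time before SQL Server aborts them (illustrative value).
    e.Command.CommandTimeout = 120;
}

protected void SqlCRM_Contact_Selected(object sender, SqlDataSourceStatusEventArgs e)
{
    // The SqlException that never reaches the catch blocks around DataBind() should surface here first.
    if (e.Exception != null)
    {
        gvContacts.EmptyDataText = "Oops! An error occurred. This may have been a timeout. Please try your search again.";
        e.ExceptionHandled = true; // keep the exception from bubbling up into CRM's error handling
    }
}

The same pair of handlers could be attached to SqlCRM_Prospect for the prospects grid.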

    Read the article

  • Our embedded linux system won't recognize a USB Device if it is plugged in before powerup. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device us a gumstix overo board running OpenEmbedded linux. We have our development almost completely done, and have run into the strangest of bugs that we can't figure out. We have a USB Device (Spectrophotometer) that has a USB2.0 Connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the usb connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system. The problem is that if the device is plugged into the Gumstix before we turn on the Gumstix, the USB Device apparently is not probed by the system (and hence does not turn on). Under a normal situation, when the connection is initialized by plugging in the usb cable, the spectro turns itself on and becomes available to the system (this can be seen via "lsusb" typically). Neither of these things are happening. There is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the usb bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider to be "normal" - that the usb system is probed and initialized at boot time, and the usb devices come online. In other words, our system is fully functional as long as we plug in the usb device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once. Additional Info: 1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected 2) There are no messages regarding the spectro or usb device (using dmesg). "lsusb" only lists the USB hubs / controllers. It is literally as if the device is not present and not plugged in. 3) We have tried a brand new image from gumstix and an older image from last year. Both images have this problem. This problem exists on all 3 gumstix devices we use. Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the usb system that is a complete emulation of "unplugging" and "replugging" a usb device. I feel like what is happening is that there is no initial probe on the usb bus that would trigger the usb handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue or at least an issue in how the kernel is initializing the usb subsystem. I'm not really sure though. I have tried the gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine output etc. 
$ uname -a Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux When the system is up and running and spectro is plugged in (working as intended), this is lsusb: Bus 001 Device 116: ID 2457:1022 Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x2457 idProduct 0x1022 bcdDevice 0.02 iManufacturer 1 USB4000 1.01.11 iProduct 2 Ocean Optics USB4000 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 46 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 400mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x86 EP 6 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) dmesg output: usb usb1: usb auto-resume hub 1-0:1.0: hub_resume usb usb2: usb auto-resume ehci-omap ehci-omap.0: resume root hub hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000 hub 2-0:1.0: hub_resume hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000 hub 1-0:1.0: hub_suspend usb usb1: bus auto-suspend hub 2-0:1.0: hub_suspend usb usb2: bus auto-suspend ehci-omap ehci-omap.0: suspend root hub usb usb2: usb resume ehci-omap ehci-omap.0: resume root hub hub 2-0:1.0: hub_resume ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT hub 2-0:1.0: port 2: status 0501 change 0001 hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000 hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: new high speed USB device using ehci-omap and address 2 ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: default language 0x0409 usb 2-2: udev 2, busnum 2, minor = 129 usb 2-2: New USB device found, idVendor=2457, idProduct=1022 usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 usb 2-2: Product: Ocean Optics USB4000 usb 2-2: Manufacturer: USB4000 1.01.11 usb 2-2: uevent usb 2-2: usb_probe_device usb 2-2: configuration #1 chosen from 1 choice usb 2-2: uevent usb 2-2: adding 2-2:1.0 (config #1, interface 0) usb 2-2:1.0: uevent drivers/usb/core/inode.c: creating file '002' dmesg has nothing to say, and lusb simply lists nothing else but the two default usb controllers / hubs if we plug the device in before the system is 
turned on.

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • Do You Know How OUM defines the four, basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, which is sometimes referred to as business system testing.  The basic functional testing types referenced by OUM include: Unit Testing Integration Testing System Testing, and  Systems Integration Testing See if you can match the following definitions with the appropriate type above? A.  This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. System under Discussion (SuD)) and external systems functions as expected. B.     This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed. C.  This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. System under Discussion (SuD)), functions as expected.  This includes out-of-the -box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as, any custom components developed to address gaps in functionality.  D.  This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement  (e.g. screen, program, report, etc.) functions as designed.   Check your answers below: (D) (B) (C) (A) If you matched all of the functional testing types to their definitions correctly, then congratulations!  If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.

    Read the article

  • ASP.NET Hosting :: ASP.NET File Upload Control

    - by mbridge
    The ASP.NET FileUpload control allows a user to browse for and upload files to the web server. From a developer's perspective, it is as simple as dragging and dropping the FileUpload control onto the aspx page. An extra control, such as a Button, is needed to actually save the file.

    <asp:FileUpload ID="FileUpload1" runat="server" />
    <asp:Button ID="B1" runat="server" Text="Save" OnClick="B1_Click" />

    By default, the FileUpload control allows a maximum file size of 4 MB to be uploaded, and the execution timeout is 110 seconds. These limits can be changed in the web.config file's httpRuntime section. The maxRequestLength attribute determines the maximum request size (in KB) that can be uploaded, and the executionTimeout attribute determines the maximum time allowed for execution.

    <httpRuntime maxRequestLength="8192" executionTimeout="220" />

    From code-behind, the MIME type, size, file name, and extension of the uploaded file can be obtained. The configured upload limits can be read through the System.Web.Configuration.HttpRuntimeSection class. Files can alternatively be saved using the System.Web.HttpFileCollection class, which is populated from the Request.Files property; the collection contains HttpPostedFile objects, each of which represents one uploaded file.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.IO;
    using System.Configuration;
    using System.Web.Configuration;

    namespace WebApplication1
    {
        public partial class WebControls : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            // Using the FileUpload control to upload and save files
            protected void B1_Click(object sender, EventArgs e)
            {
                if (FileUpload1.HasFile && FileUpload1.PostedFile.ContentLength > 0)
                {
                    // MIME type of the uploaded file
                    string mimeType = FileUpload1.PostedFile.ContentType;

                    // Size of the uploaded file, in bytes
                    int size = FileUpload1.PostedFile.ContentLength;

                    // Extension of the uploaded file
                    string extension = System.IO.Path.GetExtension(FileUpload1.FileName);

                    // Save the file
                    string path = Server.MapPath("path");
                    FileUpload1.SaveAs(path + FileUpload1.FileName);
                }

                // Read the configured limits (maxRequestLength is expressed in KB)
                HttpRuntimeSection rt = (HttpRuntimeSection)WebConfigurationManager.GetSection("system.web/httpRuntime");
                int length = rt.MaxRequestLength;

                // Execution timeout
                TimeSpan ts = rt.ExecutionTimeout;
                double seconds = ts.TotalSeconds;
            }

            // Using Request.Files to save files
            private void AltSaveFile()
            {
                HttpFileCollection coll = Request.Files;
                for (int i = 0; i < coll.Count; i++)
                {
                    HttpPostedFile file = coll[i];

                    if (file.ContentLength > 0)
                        ; // do something with the file
                }
            }
        }
    }
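    In practice it may also be worth validating the upload before calling SaveAs. The helper below is a rough sketch rather than part of the control's API; the allowed extensions and the size limit are illustrative and should be kept consistent with the maxRequestLength setting shown above.

    private static readonly string[] AllowedExtensions = { ".jpg", ".png", ".pdf" };
    private const int MaxUploadBytes = 4 * 1024 * 1024;

    private bool TryValidateUpload(FileUpload upload, out string error)
    {
        error = null;

        if (!upload.HasFile || upload.PostedFile.ContentLength == 0)
        {
            error = "No file was selected.";
            return false;
        }

        if (upload.PostedFile.ContentLength > MaxUploadBytes)
        {
            error = "The file is too large.";
            return false;
        }

        // Compare the extension case-insensitively against the whitelist.
        string extension = System.IO.Path.GetExtension(upload.FileName).ToLowerInvariant();
        if (Array.IndexOf(AllowedExtensions, extension) < 0)
        {
            error = "This file type is not allowed.";
            return false;
        }

        return true;
    }

    Inside B1_Click the save then becomes conditional on TryValidateUpload, and using System.IO.Path.GetFileName(FileUpload1.FileName) strips any client-side path before the name is combined with the server folder.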

    Read the article

  • Windows not recognizing its system drive after installing Ubuntu 11.10 alongside with windows 7. What to do?

    - by user53322
    After installing Ubuntu 11.10, Ubuntu boots perfectly, but when I select Windows 7 from the GRUB menu the machine restarts after showing the boot logo. I tried to repair the boot loader, but the process failed. I then tried the Windows system recovery disc, and there I realized the recovery tools cannot find any existing Windows installation. When I boot into Ubuntu, I can see all the existing drives with all their content. The drives are still formatted as NTFS (I have 4 drives: 3 are NTFS and 1 is ext4). I tried to repair them with the GParted partition tool, but had no luck. I also tried to reinstall Windows, but the installer doesn't show any available drive. What should I do? (Is this something to do with Ubuntu?)

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on a another system?

    - by unknownthreat
    I know that many people, at first glance, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate on my question first. Normally, when we want a program to run at native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile it from source. If you want to run a program built for another system on your system, you need to use a virtual machine or an emulator. While these tools let you use the program you need on a non-native OS, they sometimes suffer from performance problems and glitches. We also have JIT compilers, which translate bytecode into native machine language just before execution. Performance can improve considerably with a JIT compiler, but it is still not the same as running on the native system. Another program on Linux, WINE, is also a good tool for running Windows programs on a Linux system. I have tried running Team Fortress 2 on it and experimented with some settings. I got ~40 fps on Windows at mid-high settings at 1280 x 1024. On Linux, I need to turn everything down to low at 1280 x 1024 to get ~40 fps. There are 2 notable things though: (1) polygon model settings do not seem to affect the framerate whether I set them low or high; (2) when there are post-processing effects or other special effects that manipulate the drawn pixels of the current frame, the framerate drops to 10-20 fps. From this I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that need the graphics card to do the work, it slows to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run Steam and Team Fortress 2; although there are flaws, they run at lower settings. Or perhaps I should ask, "is it possible to translate a whole program from one system to another without recompiling from source and still get native speed?" I see that we also have AOT compilers; is it possible to use one for something like this? Or are there so many constraints (such as DirectX calls or differences in software architecture) that it is impossible for a non-native program to run flawlessly at native speed?

    Read the article

  • Asp.NET ReportViewer “report execution has expired or cannot be found” error when using session state service or SQL Server session state

    - by dotneteer
    We encountered an error like: ReportServerException: The report execution x5pl2245iwvvq055khsxzlj5 has expired or cannot be found. (rsExecutionNotFound)]    Microsoft.Reporting.WebForms.ServerReportSoapProxy.OnSoapException(SoapException e) +72    Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.ProxyMethodInvocation.Execute(RSExecutionConnection connection, ProxyMethod`1 initialMethod, ProxyMethod`1 retryMethod) +428    Microsoft.Reporting.WebForms.Internal.Soap.ReportingServices2005.Execution.RSExecutionConnection.GetExecutionInfo() +133    Microsoft.Reporting.WebForms.ServerReport.EnsureExecutionSession() +197    Microsoft.Reporting.WebForms.ServerReport.LoadViewState(Object viewStateObj) +256    Microsoft.Reporting.WebForms.ServerReport..ctor(SerializationInfo info, StreamingContext context) +355 [TargetInvocationException: Exception has been thrown by the target of an invocation.]    System.RuntimeMethodHandle._SerializationInvoke(Object target, SignatureStruct&amp; declaringTypeSig, SerializationInfo info, StreamingContext context) +0    System.Reflection.RuntimeConstructorInfo.SerializationInvoke(Object target, SerializationInfo info, StreamingContext context) +108    System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(Object obj, SerializationInfo info, StreamingContext context) +273    System.Runtime.Serialization.ObjectManager.FixupSpecialObject(ObjectHolder holder) +49    System.Runtime.Serialization.ObjectManager.DoFixups() +223    System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +188    System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) +203    System.Web.Util.AltSerialization.ReadValueFromStream(BinaryReader reader) +788    System.Web.SessionState.SessionStateItemCollection.ReadValueFromStreamWithAssert() +55    System.Web.SessionState.SessionStateItemCollection.DeserializeItem(String name, Boolean check) +281    System.Web.SessionState.SessionStateItemCollection.DeserializeItem(Int32 index) +110    System.Web.SessionState.SessionStateItemCollection.get_Item(Int32 index) +17    System.Web.SessionState.HttpSessionStateContainer.get_Item(Int32 index) +13    System.Web.Util.AspCompatApplicationStep.OnPageStartSessionObjects() +71    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2065 This error occurs long after the report viewer page has closed. It occurs to any asp.net page in the application, rendering the entire application unusable until the user gets a new session. The cause of the problem is that the ReportViewer uses session state. When a page retrieves session from any out-of-state session, the session variable of type Microsoft.Reporting.WebForms.ReportHierarchy is deserialized from the session storage. The deserialization could cause the object to connect to the report server when the report is no longer available. The solution is simple but not pretty. We need to clean up the session variable when the report viewer page is closed. One way is to add some Javascript to the page to handle the window.onunload event. In the event handler, call a web service to clean up the session variable. 
The name of the session variable appears to be randomly generated, so we need to loop through the session collection to find any variable of type Microsoft.Reporting.WebForms.ReportHierarchy and remove it; a sketch of such a cleanup method follows. Microsoft has implemented pinging between the report viewer and the report server to keep the report alive on the server while the report viewer is up; I hope they will go one step further and take care of this problem.
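A sketch of what that cleanup web service might look like is shown below. The class and method names are invented for illustration, the stale entry is matched by its full type name so the service does not need a direct reference to the ReportViewer assembly, and the call has to happen while the report execution is still valid (for example from window.onunload on the viewer page), because reading the session value deserializes it.

[System.Web.Services.WebService(Namespace = "http://tempuri.org/")]
public class ReportSessionCleanup : System.Web.Services.WebService
{
    [System.Web.Services.WebMethod(EnableSession = true)]
    public void RemoveReportSessions()
    {
        if (Session == null) return;

        // Collect the keys first; removing entries while enumerating Session.Keys is not safe.
        var staleKeys = new System.Collections.Generic.List<string>();
        foreach (string key in Session.Keys)
        {
            object value = Session[key];
            if (value != null &&
                value.GetType().FullName == "Microsoft.Reporting.WebForms.ReportHierarchy")
            {
                staleKeys.Add(key);
            }
        }

        foreach (string key in staleKeys)
        {
            Session.Remove(key);
        }
    }
}

Once the entry is gone, subsequent requests no longer try to deserialize a ReportHierarchy that points at an expired execution, which is what rendered the rest of the application unusable.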

    Read the article

< Previous Page | 277 278 279 280 281 282 283 284 285 286 287 288  | Next Page >