Search Results

Search found 15438 results on 618 pages for 'static allocation'.


  • Run time error: what's wrong?

    - by javacode
    I am getting a run time error with the following code:

      import javax.mail.*;
      import javax.mail.internet.*;
      import java.util.*;

      public class SendMail {
          public static void main(String[] args) throws MessagingException {
              SendMail sm = new SendMail();
              sm.postMail(new String[]{"[email protected]"}, "hi", "hello", "[email protected]");
          }

          public void postMail(String recipients[], String subject, String message, String from) throws MessagingException {
              boolean debug = false;
              // Set the host smtp address
              Properties props = new Properties();
              props.put("mail.smtp.host", "webmail.emailmyname.com");
              // create some properties and get the default Session
              Session session = Session.getDefaultInstance(props, null);
              session.setDebug(debug);
              // create a message
              Message msg = new MimeMessage(session);
              // set the from and to address
              InternetAddress addressFrom = new InternetAddress(from);
              msg.setFrom(addressFrom);
              InternetAddress[] addressTo = new InternetAddress[recipients.length];
              for (int i = 0; i < recipients.length; i++) {
                  addressTo[i] = new InternetAddress(recipients[i]);
              }
              msg.setRecipients(Message.RecipientType.TO, addressTo);
              // Optional: you can also set custom headers in the email if you want
              msg.addHeader("MyHeaderName", "myHeaderValue");
              // Setting the Subject and Content Type
              msg.setSubject(subject);
              msg.setContent(message, "text/plain");
              Transport.send(msg);
          }
      }

    Error:

      Exception in thread "main" java.lang.NoClassDefFoundError: SendMail
      Caused by: java.lang.ClassNotFoundException: SendMail
          at java.net.URLClassLoader$1.run(Unknown Source)
          at java.security.AccessController.doPrivileged(Native Method)
          at java.net.URLClassLoader.findClass(Unknown Source)
          at java.lang.ClassLoader.loadClass(Unknown Source)
          at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
          at java.lang.ClassLoader.loadClass(Unknown Source)
      Could not find the main class: SendMail. Program will exit.
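
    The NoClassDefFoundError here is a classpath problem rather than a bug in the code: the JVM cannot locate the compiled SendMail class (and the JavaMail classes need to be on the classpath as well). A minimal sketch of compiling and running from the directory containing SendMail.java, assuming the JavaMail jars are named mail.jar and activation.jar and sit in the same directory (adjust names and paths to your setup):

      javac -cp .;mail.jar;activation.jar SendMail.java
      java  -cp .;mail.jar;activation.jar SendMail

    On Linux/macOS the separator is a colon (-cp .:mail.jar:activation.jar). If the class declares a package, run it from the classpath root using its fully qualified name.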


  • What makes Tomcat 5.5 unaware of new Java web applications?

    - by Michael Mao
    This is for uni homework, but I reckon it is more a generic problem with the Tomcat server (version 5.5.27) at my uni.

    The problem: I first built a skeleton Java web application (just a simple Servlet and a welcome-file, nothing complicated, no libs included) using NetBeans 6.8 with the bundled Tomcat 6.0.20 (localhost:8084/WSD). Then, to test and prove it is "portable" and "auto-deploy-able", I cleaned and built a WSD.war file and dropped it onto my XAMPP Tomcat (localhost:8080/WSD). The war extracted everything accordingly, and I can see identical output from this Tomcat. So far, so good.

    However, after I dropped the war onto the uni server, a funny thing happens: even though I've changed the war's permissions to 755, it is simply not "responding". I then copied the extracted files to the uni server, but the MainServlet is not recognized within its context path "/WSD"; basically nothing works except the static index.jsp. I tried several times to stop and restart the uni Tomcat; it doesn't help.

    What makes this happen? Is there anything wrong with my approach? To be frank, I paid no attention to a server not under my control, and I am unfortunately not an active day-to-day Java programmer now. I understand the fundamentals of MVC, Servlets, JSPs, and JavaBeans, but I really feel frustrated by this, as I cannot see why. Or, should I ask: is a Java web application, after being cleaned and built by NetBeans 6.8, self-contained and self-configured, ready to be deployed to any Java web container? I know I could program everything in plain old JSP, but that is unacceptable to me.

    Update: I am now wondering if there is any free Tomcat hosting where I could check whether my war file and/or my web app runs without any configuration at all.
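
    One plausible culprit, worth checking given the version gap: Tomcat 5.5 implements Servlet 2.4, while NetBeans 6.8 targeting Tomcat 6 may generate a Servlet 2.5 web.xml, which an older container can silently fail to deploy. A hedged sketch of a 2.4-level deployment descriptor (servlet names and classes are placeholders):

      <web-app xmlns="http://java.sun.com/xml/ns/j2ee"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee
                                   http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
               version="2.4">
          <servlet>
              <servlet-name>MainServlet</servlet-name>
              <servlet-class>wsd.MainServlet</servlet-class>
          </servlet>
          <servlet-mapping>
              <servlet-name>MainServlet</servlet-name>
              <url-pattern>/main</url-pattern>
          </servlet-mapping>
      </web-app>

    The uni Tomcat's logs (catalina.out, localhost.*.log) should say whether the context failed to start and why.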


  • Spring: bean fails to read values from an external properties file when using the @Value annotation

    - by daydreamer
    XML configuration:

      <beans xmlns="http://www.springframework.org/schema/beans"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:util="http://www.springframework.org/schema/util"
             xsi:schemaLocation="http://www.springframework.org/schema/beans
                                 http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                                 http://www.springframework.org/schema/util
                                 http://www.springframework.org/schema/util/spring-util.xsd">
          <util:properties id="mongoProperties" location="file:///storage/local.properties" />
          <bean id="mongoService" class="com.business.persist.MongoService"></bean>
      </beans>

    and MongoService looks like:

      @Service
      public class MongoService {
          @Value("#{mongoProperties[host]}")
          private String host;

          @Value("#{mongoProperties[port]}")
          private int port;

          @Value("#{mongoProperties[database]}")
          private String database;

          private Mongo mongo;
          private static final Logger LOGGER = LoggerFactory.getLogger(MongoService.class);

          public MongoService() throws UnknownHostException {
              LOGGER.info("host=" + host + ", port=" + port + ", database=" + database);
              mongo = new Mongo(host, port);
          }

          public void putDocument(@Nonnull final DBObject document) {
              LOGGER.info("inserting document - " + document.toString());
              mongo.getDB(database).getCollection(getCollectionName(document)).insert(document, WriteConcern.SAFE);
          }
      }

    I wrote MongoServiceTest as:

      public class MongoServiceTest {
          @Autowired
          private MongoService mongoService;

          public MongoServiceTest() throws UnknownHostException {
              mongoRule = new MongoRule();
          }

          @Test
          public void testMongoService() {
              final DBObject document = DBContract.getUniqueQuery("001");
              document.put(DBContract.RVARIABLES, "values");
              document.put(DBContract.PVARIABLES, "values");
              mongoService.putDocument(document);
          }
      }

    and I see failures in the tests:

      12:37:25.224 [main] INFO c.s.business.persist.MongoService - host=null, port=0, database=null
      java.lang.NullPointerException
          at com.business.persist.MongoServiceTest.testMongoService(MongoServiceTest.java:40)

    which means the bean was not able to read the values from local.properties:

      ### === MongoDB interaction === ###
      host="127.0.0.1"
      port=27017
      database=contract

    How do I fix this?

    Update: it doesn't seem to read the values even after creating setters/getters for the fields. I am really clueless now. How can I even debug this issue? Thanks much!
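
    Two things stand out here. First, Spring injects @Value fields after the constructor returns, so the constructor will always log host=null even when injection later succeeds; moving the Mongo setup into a @PostConstruct method sidesteps that. Second, the test never bootstraps a Spring context (a plain @Autowired field in an unannotated class does nothing), and host="127.0.0.1" keeps the quotes as part of the value. A minimal sketch of the first fix, assuming the rest of the class stays as posted:

      @Service
      public class MongoService {
          @Value("#{mongoProperties[host]}")
          private String host;

          @Value("#{mongoProperties[port]}")
          private int port;

          private Mongo mongo;

          @PostConstruct
          public void init() throws UnknownHostException {
              // Fields are populated by the time @PostConstruct runs.
              mongo = new Mongo(host, port);
          }
      }

    For the test, something like @RunWith(SpringJUnit4ClassRunner.class) with @ContextConfiguration pointing at the XML above is needed before @Autowired has any effect.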


  • ASMX schema varies when using WCF Service

    - by Lijo
    Hi, I have a client (created using ASMX "Add Web Reference"). The service is WCF. The signatures of the methods differ between the client and the service: I get some unwanted parameters in the client-side methods. Note: I have used IsRequired = true for the DataMember.

    Service:

      [OperationContract]
      int GetInt();

    Client:

      proxy.GetInt(out requiredResult, out resultBool);

    Could you please help me make the schema consistent for both WCF and non-WCF clients? Are there any best practices for that?

      using System.ServiceModel;
      using System.Runtime.Serialization;

      namespace SimpleLibraryService
      {
          [ServiceContract(Namespace = "http://Lijo.Samples")]
          public interface IElementaryService
          {
              [OperationContract]
              int GetInt();

              [OperationContract]
              int SecondTestInt();
          }

          public class NameDecorator : IElementaryService
          {
              [DataMember(IsRequired = true)]
              int resultIntVal = 1;
              int firstVal = 1;

              public int GetInt()
              {
                  return firstVal;
              }

              public int SecondTestInt()
              {
                  return resultIntVal;
              }
          }
      }

    Binding = "basicHttpBinding"

      using NonWCFClient.WebServiceTEST;

      namespace NonWCFClient
      {
          class Program
          {
              static void Main(string[] args)
              {
                  NonWCFClient.WebServiceTEST.NameDecorator proxy = new NameDecorator();
                  int requiredResult = 0;
                  bool resultBool = false;
                  proxy.GetInt(out requiredResult, out resultBool);
                  Console.WriteLine("GetInt___" + requiredResult.ToString() + "__" + resultBool.ToString());

                  int secondResult = 0;
                  bool secondBool = false;
                  proxy.SecondTestInt(out secondResult, out secondBool);
                  Console.WriteLine("SecondTestInt___" + secondResult.ToString() + "__" + secondBool.ToString());
                  Console.ReadLine();
              }
          }
      }

    Please help. Thanks, Lijo


  • Enumerating pixel formats for adaptors and modes with OpenGL

    - by Robinson
    I'm trying to code an OpenGL path for my 3D engine. The D3D path enumerates all device adaptors, all modes (by mode I mean bit depth, dimensions, windowed availability, and refresh rate) for each adaptor, and then all pixel formats available for the given mode and adaptor, alongside certain useful caps (shader version, filter types, etc.). So, broadly, I have the following protected functions in the class:

      // Enumerate all back/front buffer combinations.
      virtual void EnumerateBackFrontBufferCombinations(CComPtr<IDirect3D9>& d3d9);

      // Enumerate all depth/stencil formats.
      virtual void EnumerateDepthStencilFormats(CComPtr<IDirect3D9>& d3d9);

      // Enumerate all multi-sample formats.
      virtual void EnumerateMultiSampleTypes(CComPtr<IDirect3D9>& d3d9);

      // Enumerate all device formats, i.e. dynamic, static, render target, etc.
      virtual void EnumerateMapFormats(CComPtr<IDirect3D9>& d3d9);

      // Enumerate all capabilities.
      virtual void EnumerateCapabilities(CComPtr<IDirect3D9>& d3d9);

    The adaptors are enumerated with EnumDisplayDevices, and the modes (resolutions and refresh rates) with EnumDisplaySettings, so this can be done for either GL or D3D. The other functions I'm not so sure about with OpenGL. What are the equivalents to IDirect3D9's CheckDeviceType, CheckDeviceFormat, CheckDeviceMultiSampleType, and CheckDepthStencilMatch? I know I can use DescribePixelFormat given a DC, but you kind of need to have created the window before you can use a DC with it, yet you can't create the window correctly until you know what formats you're going to use. Any tips you can give me? Thanks.
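
    The usual workaround is a throwaway window: create a hidden dummy window purely to obtain a DC, enumerate pixel formats on it, then destroy it and create the real window with the chosen format. A minimal Win32 sketch of that idea (error handling omitted; the window parameters are placeholders):

      #include <windows.h>

      void EnumeratePixelFormats()
      {
          // Hidden dummy window; we only need its DC.
          HWND hwnd = CreateWindowA("STATIC", "dummy", WS_OVERLAPPEDWINDOW,
                                    0, 0, 1, 1, NULL, NULL,
                                    GetModuleHandle(NULL), NULL);
          HDC hdc = GetDC(hwnd);

          PIXELFORMATDESCRIPTOR pfd;
          // DescribePixelFormat returns the highest pixel format index.
          int count = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
          for (int i = 1; i <= count; ++i)
          {
              DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
              if ((pfd.dwFlags & PFD_SUPPORT_OPENGL) == 0)
                  continue;
              // Record pfd.cColorBits, pfd.cDepthBits, pfd.cStencilBits, ...
          }

          ReleaseDC(hwnd, hdc);
          DestroyWindow(hwnd);
      }

    For multisample and other modern attributes, the same dummy window can host a temporary GL context so the WGL_ARB_pixel_format entry points (wglGetPixelFormatAttribivARB, wglChoosePixelFormatARB) become available.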


  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve our performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we are considering right now:

    1. Using memcache for all major queries and leaving it alone to do what it does best.
    2. Using memcache for the most commonly retrieved data, combined with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though it is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and simply upgrade the memory as the load increases with the number of users? Thanks a lot!
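
    For the combined approach, a thin wrapper that checks memcache first and falls back to a file cache (refilling memcache on a hit) keeps the calling code oblivious to which tier answered. A rough PHP sketch, assuming the standard Memcache extension and a writable cache directory (host, paths, and TTLs are placeholders):

      <?php
      class TwoTierCache
      {
          private $mc;
          private $dir = '/var/cache/app';

          public function __construct()
          {
              $this->mc = new Memcache();
              $this->mc->connect('127.0.0.1', 11211);
          }

          public function get($key)
          {
              $val = $this->mc->get($key);
              if ($val !== false) {
                  return $val;                          // hot tier hit
              }
              $file = $this->dir . '/' . md5($key);
              if (is_readable($file)) {
                  $val = unserialize(file_get_contents($file));
                  $this->mc->set($key, $val, 0, 300);   // promote back to memory
                  return $val;
              }
              return false;
          }

          public function set($key, $val, $ttl = 300)
          {
              $this->mc->set($key, $val, 0, $ttl);
              file_put_contents($this->dir . '/' . md5($key), serialize($val));
          }
      }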


  • jQuery $.getJSON - How do I parse a flickr.photos.search REST API call?

    - by Chad
    Trying to adapt the $.getJSON Flickr example:

      $.getJSON("http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?",
        function(data){
          $.each(data.items, function(i,item){
            $("<img/>").attr("src", item.media.m).appendTo("#images");
            if ( i == 3 ) return false;
          });
        });

    to read from the flickr.photos.search REST API method, but the JSON response is different for this call. This is what I've done so far:

      var url = "http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=9322c53dde3b36bda33f79c16bb99104&tags=yokota+air+base&safe_search=1&per_page=20";
      var src;
      $.getJSON(url + "&format=json&jsoncallback=?", function(data){
        $.each(data.photos, function(i,item){
          src = "http://farm"+ item.photo.farm +".static.flickr.com/"+ item.photo.server +"/"+ item.photo.id +"_"+ item.photo.secret +"_m.jpg";
          $("<img/>").attr("src", src).appendTo("#images");
          if ( i == 3 ) return false;
        });
      });

    I guess I'm not building the image src correctly. I couldn't find any documentation on how to build the image src based on the JSON response. How do you parse a flickr.photos.search REST API call?
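
    The REST response nests the array one level deeper than the feed: the photos live in data.photos.photo, and each entry carries farm/server/id/secret directly (not under a photo property). A sketch of the corrected loop under that assumption:

      $.getJSON(url + "&format=json&jsoncallback=?", function (data) {
        $.each(data.photos.photo, function (i, item) {
          // Flickr URL pattern: farm{farm}.static.flickr.com/{server}/{id}_{secret}_m.jpg
          var src = "http://farm" + item.farm + ".static.flickr.com/" +
                    item.server + "/" + item.id + "_" + item.secret + "_m.jpg";
          $("<img/>").attr("src", src).appendTo("#images");
          if (i == 3) return false;
        });
      });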


  • PHP + MySQLi: variable parameter/result binding with prepared statements

    - by Brian Warshaw
    In a project that I'm about to wrap up, I've written and implemented an object-relational mapping solution for PHP. Before the doubters and dreamers cry out "how on earth?", relax -- I haven't found a way to make late static binding work -- I'm just working around it in the best way that I possibly can.

    Anyway, I'm not currently using prepared statements for querying, because I couldn't come up with a way to pass a variable number of arguments to the bind_param() or bind_result() methods. Why do I need to support a variable number of arguments, you ask? Because the superclass of my models (think of my solution as a hacked-up PHP ActiveRecord wannabe) is where the querying is defined, and so the find() method, for example, doesn't know how many parameters it would need to bind.

    Now, I've already thought of building an argument list and passing a string to eval(), but I don't like that solution very much -- I'd rather just implement my own security checks and pass on statements. Does anyone have any suggestions (or success stories) about how to get this done? If you can help me solve this first problem, perhaps we can tackle binding the result set (something I suspect will be more difficult, or at least more resource-intensive if it involves an initial query to determine table structure).
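
    One widely used approach is call_user_func_array() against the statement's bind_param() method, with the caveat that mysqli requires the bound arguments by reference. A sketch, not drop-in ORM code:

      <?php
      function bindAndExecute(mysqli_stmt $stmt, $types, array $values)
      {
          // bind_param() needs references, so build a reference array first.
          $params = array($types);
          foreach ($values as $k => $v) {
              $params[] = &$values[$k];
          }
          call_user_func_array(array($stmt, 'bind_param'), $params);
          return $stmt->execute();
      }

      // Usage sketch (table and types are placeholders):
      // $stmt = $mysqli->prepare('SELECT * FROM users WHERE age > ? AND name = ?');
      // bindAndExecute($stmt, 'is', array(18, 'Alice'));

    Result binding can be handled the same way: $stmt->result_metadata() describes the columns, so a reference array for bind_result() can be built without an extra schema query.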


  • How to scale JPEG images with a non-standard sampling factor in Java?

    - by HRJ
    I am using Java AWT for scaling a JPEG image, to create thumbnails. The code works fine when the image has a normal sampling factor (2x2, 1x1, 1x1). However, an image which has the sampling factor (1x1, 1x1, 1x1) creates problems when scaled: the colors get corrupted, though the features remain recognizable. The code I am using is roughly equivalent to:

      static BufferedImage awtScaleImage(BufferedImage image, int maxSize, int hint) {
          // We use AWT Image scaling because it has far superior quality
          // compared to JAI scaling. It also performs better (speed)!
          System.out.println("AWT Scaling image to: " + maxSize);
          int w = image.getWidth();
          int h = image.getHeight();
          float scaleFactor = 1.0f;
          if (w > h)
              scaleFactor = ((float) maxSize / (float) w);
          else
              scaleFactor = ((float) maxSize / (float) h);
          w = (int)(w * scaleFactor);
          h = (int)(h * scaleFactor);
          // since this code can run both headless and in a graphics context
          // we will just create a standard rgb image here and take the
          // performance hit in a non-compatible image format if any
          Image i = image.getScaledInstance(w, h, hint);
          image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
          Graphics2D g = image.createGraphics();
          g.drawImage(i, null, null);
          g.dispose();
          i.flush();
          return image;
      }

    (Code courtesy of this page.) Is there a better way to do this? I have a test image with a sampling factor of (1x1, 1x1, 1x1) that reproduces the corruption.
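
    A workaround that often helps with odd JPEG sampling factors (hedged: worth trying, not guaranteed) is to normalize the decoded image into a plain TYPE_INT_RGB buffer before scaling, so the scaler never sees the decoder's unusual color model:

      static BufferedImage toRGB(BufferedImage src) {
          // Repaint into a standard RGB image; strips any odd color model.
          BufferedImage rgb = new BufferedImage(
                  src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
          Graphics2D g = rgb.createGraphics();
          g.drawImage(src, 0, 0, null);
          g.dispose();
          return rgb;
      }

      // Usage sketch: thumbnail = awtScaleImage(toRGB(image), 160, Image.SCALE_SMOOTH);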


  • Calling DI Container directly in method code (MVC Actions)

    - by fearofawhackplanet
    I'm playing with DI (using Unity). I've learned how to do constructor and property injection. I have a static container exposed through a property in my Global.asax file (the MvcApplication class).

    I need a number of different objects in my controller. It doesn't seem right to inject these through the constructor, partly because of the high quantity of them, and partly because they are only needed in some action methods. The question is: is there anything wrong with just calling my container directly from within the action methods?

      public ActionResult Foo()
      {
          IBar bar = (Bar)MvcApplication.Container.Resolve(IBar);
          // ... Bar uses a default constructor. I'm not actually doing any
          // injection here; I'm just telling my container to give me Bar
          // when I ask for IBar, so I can hide the existence of the concrete
          // Bar from my Controller.
      }

    This seems the simplest and most efficient way of doing things, but I've never seen an example used in this way. Is there anything wrong with this? Am I missing the concept in some way?
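
    What the question describes is the service-locator pattern; the usual objection is that it hides a controller's dependencies and couples every action to the container. The conventional MVC alternative is a custom controller factory that lets the container build the controllers, so constructor injection stays viable even with many dependencies. A rough sketch with Unity (hedged: API names as in Unity's IUnityContainer; adjust to your version):

      public class UnityControllerFactory : DefaultControllerFactory
      {
          private readonly IUnityContainer _container;

          public UnityControllerFactory(IUnityContainer container)
          {
              _container = container;
          }

          protected override IController GetControllerInstance(
              RequestContext requestContext, Type controllerType)
          {
              // Let Unity construct the controller and satisfy its dependencies.
              return (IController)_container.Resolve(controllerType);
          }
      }

      // Hypothetical wiring in Application_Start:
      // ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(Container));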


  • Why does my async call not work?

    - by Petr
    Hi, I am trying to understand what IAsyncResult is good for, and therefore I wrote this code. The problem is that it behaves as if I had called MetodaAsync the normal way: while debugging, the program stops here until the method completes. Any help appreciated, thank you.

      using System;
      using System.Collections.Generic;
      using System.Linq;
      using System.Text;
      using System.Threading;

      namespace ConsoleApplication1
      {
          class Program
          {
              delegate int Delegat();

              static void Main(string[] args)
              {
                  Program p = new Program();
                  Delegat d = new Delegat(p.MetodaAsync);
                  IAsyncResult a = d.BeginInvoke(null, null); // I have removed the callback
                  int returned = d.EndInvoke(a);
                  Console.WriteLine("AAA");
              }

              private int MetodaAsync()
              {
                  int AC = 0;
                  for (int I = 0; I < 600000; I++)
                  {
                      for (int A = 0; A < 6000000; A++)
                      {
                      }
                      Console.Write("B");
                  }
                  return AC;
              }
          }
      }
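
    The call here is asynchronous; what blocks is the immediate EndInvoke, which by design waits for completion. To see the asynchrony, either do other work between BeginInvoke and EndInvoke, or collect the result in a callback. A sketch of the callback variant against the same delegate:

      Delegat d = new Delegat(p.MetodaAsync);
      d.BeginInvoke(ar =>
      {
          // Runs on a thread-pool thread once MetodaAsync finishes.
          int returned = d.EndInvoke(ar);
          Console.WriteLine("done: " + returned);
      }, null);

      Console.WriteLine("AAA");   // prints immediately, before the work completes
      Console.ReadLine();         // keep the process alive for the callback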


  • What is the best way to return results from the business layer to the presentation layer when using LINQ?

    - by samsur
    I have a business layer that has DTOs which are used in the presentation layer. This application uses Entity Framework. Here is an example of a class called RoleDTO:

      public class RoleDTO
      {
          public Guid RoleId { get; set; }
          public string RoleName { get; set; }
          public string RoleDescription { get; set; }
          public int? OrganizationId { get; set; }
      }

    In the BLL I want a method that returns a list of DTOs. I would like to know which is the better approach: returning IQueryable, or a list of DTOs? I feel that returning IQueryable is not a good idea because the connection needs to stay open. Here are the two methods using the different approaches.

    First approach:

      public class RoleBLL
      {
          private servicedeskEntities sde;

          public RoleBLL()
          {
              sde = new servicedeskEntities();
          }

          public IQueryable<RoleDTO> GetAllRoles()
          {
              IQueryable<RoleDTO> role = from r in sde.Roles
                                         select new RoleDTO()
                                         {
                                             RoleId = r.RoleID,
                                             RoleName = r.RoleName,
                                             RoleDescription = r.RoleDescription,
                                             OrganizationId = r.OrganizationId
                                         };
              return role;
          }
      }

    Note: in the above method the data context is a private attribute set in the constructor, so that the connection stays open.

    Second approach:

      public static List<RoleDTO> GetAllRoles()
      {
          List<RoleDTO> roleDTO = new List<RoleDTO>();
          using (servicedeskEntities sde = new servicedeskEntities())
          {
              var roles = from pri in sde.Roles
                          select new { pri.RoleID, pri.RoleName, pri.RoleDescription };

              // Add the role entities to the DTO list and return. This is necessary,
              // as anonymous types cannot be returned across methods.
              foreach (var item in roles)
              {
                  RoleDTO roleItem = new RoleDTO();
                  roleItem.RoleId = item.RoleID;
                  roleItem.RoleDescription = item.RoleDescription;
                  roleItem.RoleName = item.RoleName;
                  roleDTO.Add(roleItem);
              }
              return roleDTO;
          }
      }

    Please let me know if there is a better approach. Thanks!
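
    A middle ground worth considering (a sketch, assuming the same context and DTO): project directly into the DTO inside the query and materialize with ToList() before the context is disposed, which avoids both the long-lived context and the hand-written copy loop:

      public static List<RoleDTO> GetAllRoles()
      {
          using (servicedeskEntities sde = new servicedeskEntities())
          {
              return (from r in sde.Roles
                      select new RoleDTO
                      {
                          RoleId = r.RoleID,
                          RoleName = r.RoleName,
                          RoleDescription = r.RoleDescription,
                          OrganizationId = r.OrganizationId
                      }).ToList();   // executes now, while the connection is open
          }
      }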


  • Playing with bytes: need to convert from Java to C#

    - by ibiza
    Hi fellow programmers, I am not used to manipulating bytes in my code, and I have this piece of code written in Java that I need to convert to its C# equivalent:

      protected static final int putLong(final byte[] b, final int off, final long val) {
          b[off + 7] = (byte) (val >>> 0);
          b[off + 6] = (byte) (val >>> 8);
          b[off + 5] = (byte) (val >>> 16);
          b[off + 4] = (byte) (val >>> 24);
          b[off + 3] = (byte) (val >>> 32);
          b[off + 2] = (byte) (val >>> 40);
          b[off + 1] = (byte) (val >>> 48);
          b[off + 0] = (byte) (val >>> 56);
          return off + 8;
      }

    Thanks in advance for all your help. I am looking forward to learning from this.
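
    C# has no >>> operator, but because each shifted value is truncated to a byte here, the sign bits that >>> would discard never survive the cast anyway; shifting an unsigned view of the value gives identical bytes. A sketch of the equivalent, with the cast through ulong shown as the general-purpose unsigned-shift idiom:

      protected static int PutLong(byte[] b, int off, long val)
      {
          ulong v = (ulong)val;   // unsigned view; >> now behaves like Java's >>>
          b[off + 7] = (byte)(v >> 0);
          b[off + 6] = (byte)(v >> 8);
          b[off + 5] = (byte)(v >> 16);
          b[off + 4] = (byte)(v >> 24);
          b[off + 3] = (byte)(v >> 32);
          b[off + 2] = (byte)(v >> 40);
          b[off + 1] = (byte)(v >> 48);
          b[off + 0] = (byte)(v >> 56);
          return off + 8;
      }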


  • Compiling code at runtime and loading it into the current AppDomain

    - by Richard Friend
    Hi, I'm compiling some code at runtime and then loading the assembly into the current AppDomain. However, when I then try to do Type.GetType, it can't find the type. Here is how I compile the code:

      public static Assembly CompileCode(string code)
      {
          Microsoft.CSharp.CSharpCodeProvider provider = new CSharpCodeProvider();
          ICodeCompiler compiler = provider.CreateCompiler();
          CompilerParameters compilerparams = new CompilerParameters();
          compilerparams.GenerateExecutable = false;
          compilerparams.GenerateInMemory = false;

          foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
          {
              try
              {
                  string location = assembly.Location;
                  if (!String.IsNullOrEmpty(location))
                  {
                      compilerparams.ReferencedAssemblies.Add(location);
                  }
              }
              catch (NotSupportedException)
              {
                  // this happens for dynamic assemblies, so just ignore it.
              }
          }

          CompilerResults results = compiler.CompileAssemblyFromSource(compilerparams, code);
          if (results.Errors.HasErrors)
          {
              StringBuilder errors = new StringBuilder("Compiler Errors :\r\n");
              foreach (CompilerError error in results.Errors)
              {
                  errors.AppendFormat("Line {0},{1}\t: {2}\n", error.Line, error.Column, error.ErrorText);
              }
              throw new Exception(errors.ToString());
          }
          else
          {
              AppDomain.CurrentDomain.Load(results.CompiledAssembly.GetName());
              return results.CompiledAssembly;
          }
      }

    This bit fails. Getting the type from the compiled assembly works just fine, but Type.GetType does not find it:

      Assembly assem = RuntimeCodeCompiler.CompileCode(code);
      string typeName = String.Format("Peverel.AppFramework.Web.GenCode.ObjectDataSourceProxy_{0}", safeTypeName);
      Type t = assem.GetType(typeName); // This works just fine..

      Type doesntWork = Type.GetType(t.AssemblyQualifiedName);
      Type doesntWork2 = Type.GetType(t.Name);
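
    Type.GetType only probes the calling assembly, mscorlib, and assemblies the loader can locate by name on disk; an assembly emitted at runtime is generally not found that way. Two commonly used workarounds: keep a reference to the Assembly and call assem.GetType(...), or hook AppDomain.AssemblyResolve so that name-based lookups can be redirected to the already-loaded assembly. A simplified sketch of the latter:

      // Register once, e.g. right after compiling:
      AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
      {
          // Hand back any already-loaded assembly whose name matches the request.
          foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
          {
              if (a.FullName == args.Name)
                  return a;
          }
          return null;
      };

      // Now the assembly-qualified lookup can succeed:
      Type works = Type.GetType(t.AssemblyQualifiedName);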


  • Setting the default stack size on Linux globally for the program

    - by wowus
    So I've noticed that the default stack size for threads on Linux is 8MB (if I'm wrong, PLEASE correct me), and, incidentally, 1MB on Windows. This is quite bad for my application, as on a 4-core processor that means 64MB of space is used JUST for threads! The worst part is, I'm never using more than 100kb of stack per thread (I abuse the heap a LOT ;)).

    My solution right now is to limit the stack size of threads. However, I have no idea how to do this portably. Just for context, I'm using Boost.Thread for my threading needs. I'm okay with a little bit of #ifdef hell, but I'd like to know how to do it easily first. Basically, I want something like this (where windows_* is linked in Windows builds, and posix_* is linked in Linux builds):

      // windows_stack_limiter.c
      int limit_stack_size()
      {
          // Windows impl.
          return 0;
      }

      // posix_stack_limiter.c
      int limit_stack_size()
      {
          // Linux impl.
          return 0;
      }

      // stack_limiter.cpp
      int limit_stack_size();
      static volatile int placeholder = limit_stack_size();

    How do I flesh out those functions? Or, alternatively, am I just doing this entirely wrong? Remember, I have no control over the actual thread creation (no new params to CreateThread on Windows), as I'm using Boost.Thread.
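
    On POSIX, the process-wide default that new threads inherit can be lowered with setrlimit(RLIMIT_STACK), since pthreads use the soft stack limit as the default stack size when no attribute is given. A sketch of the posix side under that assumption (the 256 KB figure is a placeholder):

      // posix_stack_limiter.c -- lower the default stack size inherited
      // by threads created without explicit attributes.
      #include <sys/resource.h>

      int limit_stack_size(void)
      {
          struct rlimit rl;
          rl.rlim_cur = 256 * 1024;
          rl.rlim_max = 256 * 1024;
          return setrlimit(RLIMIT_STACK, &rl);
      }

    Caveat (hedged): glibc may sample RLIMIT_STACK early, so this is most reliable when applied before the program starts (e.g. via ulimit -s in a wrapper script); otherwise pthread_attr_setstacksize at creation time is the dependable route, which does require control over thread creation. On Windows the default comes from the PE header and can be set at link time with the /STACK option, which needs no runtime code at all.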


  • Unit testing: what is the fundamental goal?

    - by David
    My co-workers and I had a bit of a disagreement last night about unit testing in our PHP/MySQL application. Half of us argued that when unit testing a function within a class, you should mock everything outside of that class and its parents. The other half of us argued that you SHOULDN'T mock anything that is a direct dependency of the class either.

    The specific example was our logging mechanism, which goes through a static Logging class; we have a number of Logging::log() calls in various locations throughout our application. The first half of us said the logging mechanism should be faked (mocked) because it would be tested in the Logging unit tests. The second half argued that we should include the original Logging class in our unit tests, so that if we make a change to our logging interface, we'll be able to see whether it creates problems in other parts of the application due to failing to update the call sites.

    So I guess the fundamental question is: do unit tests serve to test the functionality of a single unit in a closed environment, or to show the consequences of changes to a single unit in a larger environment? If it's one of these, how do you accomplish the other?


  • nginx multiple domain virtual host configuration

    - by Poe
    I'm setting up nginx with multiple-domain or wildcard support for convenience's sake, rather than setting up 50+ different sites-available/* files. Hopefully this is enough to show you what I'm trying to do. Some are static sites; some are dynamic, usually with WordPress installed.

    If an index.php exists, everything works as expected. If a file is requested that does not exist (missing.html), a 500 error is given due to the rewrite. The logged error is:

      *112 rewrite or internal redirection cycle while processing "/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/index.php/missing.html"

    The basic nginx configuration I'm currently using is:

      listen 80 default;
      server_name _;
      ...
      location / {
          root /var/www/$host;
          if (-f $request_filename) {
              expires max;
              break;
          }
          # problem: what if index.php does not exist?
          if (!-e $request_filename) {
              rewrite ^/(.*)$ /index.php/$1 last;
          }
      }
      ...

    If index.php does not exist, and the requested file also does not exist, I would like a 404 to be returned. Currently, nginx does not support multiple-condition ifs or nested ifs, so I need a workaround.
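
    try_files (available since nginx 0.7.27) replaces both if checks: it serves the first candidate that exists and internally redirects to the last entry otherwise, so the self-referencing rewrite loop disappears. A sketch, assuming PHP requests are handled by a separate location ~ \.php$ block:

      location / {
          root /var/www/$host;
          # Serve the exact file or directory, else route to the front controller.
          try_files $uri $uri/ /index.php?q=$uri;
      }

    For the purely static vhosts with no index.php at all, a variant like try_files $uri $uri/ =404; returns a clean 404 instead of cycling; which form applies could be decided per server block.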


  • How to create an instance of an object with RTTI in Delphi 2010?

    - by Paul
    As we all know, when we call a constructor of a class like this:

      instance := TSomeClass.Create;

    the Delphi compiler actually does the following things:

    1. Call the static NewInstance method to allocate memory and initialize the memory layout.
    2. Call the constructor method to perform the initialization of the class.
    3. Call the AfterConstruction method.

    It's simple and easy to understand, but I'm not very sure how the compiler handles exceptions in the second and third steps. It seems there is no explicit way to create an instance using an RTTI constructor method in D2010, so I wrote a simple function in the Spring Framework for Delphi to reproduce the creation process:

      class function TActivator.CreateInstance(instanceType: TRttiInstanceType;
        constructorMethod: TRttiMethod; const arguments: array of TValue): TObject;
      var
        classType: TClass;
      begin
        TArgument.CheckNotNull(instanceType, 'instanceType');
        TArgument.CheckNotNull(constructorMethod, 'constructorMethod');
        classType := instanceType.MetaclassType;
        Result := classType.NewInstance;
        try
          constructorMethod.Invoke(Result, arguments);
        except
          on Exception do
          begin
            if Result is TInterfacedObject then
            begin
              Dec(TInterfacedObjectHack(Result).FRefCount);
            end;
            Result.Free;
            raise;
          end;
        end;
        try
          Result.AfterConstruction;
        except
          on Exception do
          begin
            Result.Free;
            raise;
          end;
        end;
      end;

    I feel it may not be 100% right, so please show me the way. Thanks!


  • WCF client binding configuration in program code

    - by smarsha
    I have the following class that configures security, encoding, and token parameters, but I am having trouble adding a BasicHttpBinding to specify a MaxReceivedMessageSize. Any insight would be appreciated.

      public class MultiAuthenticationFactorBinding
      {
          public static Binding CreateMultiFactorAuthenticationBinding()
          {
              HttpsTransportBindingElement httpTransport = new HttpsTransportBindingElement();
              CustomBinding binding = new CustomBinding();
              binding.Name = "myCustomBinding";

              TransportSecurityBindingElement messageSecurity =
                  TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
              messageSecurity.AllowInsecureTransport = true;
              messageSecurity.EnableUnsecuredResponse = true;
              messageSecurity.MessageSecurityVersion =
                  MessageSecurityVersion.WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12;
              messageSecurity.SecurityHeaderLayout = SecurityHeaderLayout.Strict;
              messageSecurity.IncludeTimestamp = true;
              messageSecurity.SetKeyDerivation(false);

              TextMessageEncodingBindingElement Quota =
                  new TextMessageEncodingBindingElement(MessageVersion.Soap11, System.Text.Encoding.UTF8);
              Quota.ReaderQuotas.MaxDepth = 32;
              Quota.ReaderQuotas.MaxStringContentLength = Int32.MaxValue;
              Quota.ReaderQuotas.MaxArrayLength = 16384;
              Quota.ReaderQuotas.MaxBytesPerRead = 4096;
              Quota.ReaderQuotas.MaxNameTableCharCount = 16384;

              X509SecurityTokenParameters clientX509SupportingTokenParameters =
                  new X509SecurityTokenParameters();
              clientX509SupportingTokenParameters.InclusionMode =
                  SecurityTokenInclusionMode.AlwaysToRecipient;
              clientX509SupportingTokenParameters.RequireDerivedKeys = false;
              messageSecurity.EndpointSupportingTokenParameters.Endorsing.Add(
                  clientX509SupportingTokenParameters);

              //binding.ReceiveTimeout = new TimeSpan(0,0,300);
              binding.Elements.Add(Quota);
              binding.Elements.Add(messageSecurity);
              binding.Elements.Add(httpTransport);
              return binding;
          }
      }
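
    In a CustomBinding, the message-size cap lives on the transport binding element rather than on BasicHttpBinding; the HttpsTransportBindingElement already created in this method exposes MaxReceivedMessageSize (and MaxBufferSize) directly. A sketch of the relevant lines, assuming a 10 MB cap is wanted (the size is a placeholder):

      HttpsTransportBindingElement httpTransport = new HttpsTransportBindingElement();
      httpTransport.MaxReceivedMessageSize = 10 * 1024 * 1024;  // bytes
      httpTransport.MaxBufferSize = 10 * 1024 * 1024;           // match it in buffered transfer mode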


  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

      >ping localhost | powershell -NonInteractive -NoProfile -C "do {$line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
      23:56:45time<1ms
      23:56:45
      23:56:46time<1ms
      23:56:46
      23:56:47time<1ms
      23:56:47
      23:56:47

    Normally 'ping localhost' on Vista64 looks like this, so there is a lot of data missing from the output above:

      Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
      Reply from ::1: time<1ms
      Reply from ::1: time<1ms
      Reply from ::1: time<1ms
      Reply from ::1: time<1ms
      Ping statistics for ::1:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But the same API called from C# receives all the data sent to the process (excluding some newline differences). Code:

      namespace ConOutTime {
          class Program {
              static void Main(string[] args) {
                  string s;
                  while ((s = Console.ReadLine()) != null) {
                      if (s.Length > 0) // don't write time for empty lines
                          Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                  }
              }
          }
      }

    Output:

      00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
      00:44:30 Reply from ::1: time<1ms
      00:44:31 Reply from ::1: time<1ms
      00:44:32 Reply from ::1: time<1ms
      00:44:33 Reply from ::1: time<1ms
      00:44:33 Ping statistics for ::1:
      00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      00:44:33 Approximate round trip times in milli-seconds:
      00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, when calling the same API from PowerShell instead of C#, many parts of stdin get 'eaten'. Is the PowerShell host reading from stdin even though I didn't use 'PowerShell.exe -Command -'?
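
    The likely explanation (hedged, but consistent with the symptom) is that the PowerShell host itself consumes the piped stdin to feed its own pipeline, competing with the script's [Console]::ReadLine() calls. Reading the pipeline the PowerShell way, via the $input enumerator, avoids the contention entirely; a sketch of the same one-liner rewritten in that style:

      ping localhost | powershell -NonInteractive -NoProfile -C "$input | ForEach-Object { '{0:HH:mm:ss} {1}' -f (Get-Date), $_ }"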


  • How can I tell whether a webpart deployed to a site is a native webpart that ships with SharePoint?

    - by program247365
    I have a SharePoint 2007 MOSS instance, and I'm on a fact-finding mission. There have been multiple developers developing multiple webparts and deploying them (using the VS2005/2008 SharePoint extensions).

    I thought maybe I could look at the items in the "Web Part Gallery" list in my site and go by the "Modified By" field, but a developer's name somehow appears on some of the out-of-the-box webparts, while ones I know were custom developed say "System Account" -- so that field is a no-go. I then thought maybe I could look at the "Group" each webpart was assigned to, but it looks like they were arbitrarily assigned to many different groups, inconsistently -- so that piece of information is a no-go too.

    Here is the code I have for looping through and getting the names of all the webparts. Is there any property I can access on the list items that would tell me whether a webpart was custom developed? Any way to distinguish the custom webparts from the out-of-the-box ones? Is there another way to do this?

      #region Misc Site Collection Methods
      public static List<string> GetAllWebParts(string connectedSPInstanceUrl)
      {
          List<string> lstWebParts = new List<string>();
          try
          {
              using (SPSite site = new SPSite(connectedSPInstanceUrl))
              {
                  using (SPWeb web = site.OpenWeb())
                  {
                      SPList list = web.Lists["Web Part Gallery"];
                      foreach (SPListItem item in list.Items)
                      {
                          lstWebParts.Add(item.Name);
                      }
                  }
              }
          }
          catch (Exception ex)
          {
              lstWebParts.Add("Error");
              lstWebParts.Add("Message: " + ex.Message);
              lstWebParts.Add("Inner Exception: " + ex.InnerException.ToString());
              lstWebParts.Add("Stack Trace: " + ex.StackTrace);
          }
          return lstWebParts;
      }
      #endregion
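
    One heuristic worth trying (hedged: a sketch of an idea, not a guaranteed classifier) is to read each gallery item's definition file (the .dwp/.webpart XML) and look at the assembly it references; entries pointing at Microsoft.SharePoint or Microsoft.Office.* assemblies are stock parts, everything else is likely custom:

      foreach (SPListItem item in list.Items)
      {
          // The gallery item's file is the .dwp/.webpart definition XML.
          string xml = System.Text.Encoding.UTF8.GetString(item.File.OpenBinary());
          bool looksStock = xml.Contains("Microsoft.SharePoint") ||
                            xml.Contains("Microsoft.Office");
          lstWebParts.Add(item.Name + (looksStock ? " (out-of-the-box?)" : " (custom?)"));
      }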


  • Implementing gravity for a projectile: delta time issue

    - by Murat Nafiz
    I'm trying to implement simple projectile motion in Android (with OpenGL), and I want to add gravity to my world to simulate a ball dropping realistically. I simply update my renderer with a delta time, calculated by:

      float deltaTime = (System.nanoTime() - startTime) / 1000000000.0f;
      startTime = System.nanoTime();
      screen.update(deltaTime);

    In my screen.update(deltaTime) method:

      if (isballMoving) {
          golfBall.updateLocationAndVelocity(deltaTime);
      }

    And in the golfBall.updateLocationAndVelocity(deltaTime) method:

      public final static double G = -9.81;

      double vz0 = getVZ0();   // gets initial velocity (z)
      double z0 = getZ0();     // gets initial height
      double time = getS();    // gets total time since the motion began
      double vz = vz0 + G * deltaTime;                                     // calculate new velocity (z)
      double z = z0 - vz0 * deltaTime - 0.5 * G * deltaTime * deltaTime;   // calculate new position
      time = time + deltaTime; // update time
      setS(time);              // set new total time

    Now here is the problem: if I set deltaTime statically to 0.07, the animation runs normally. But since the update() method runs as fast as it can, the length and therefore the speed of the ball varies from device to device. If I don't touch deltaTime and run the program (deltaTimes are between 0.01 and 0.02 on my test devices), the animation length and the ball speed are the same across devices, but the animation is SLOW! What am I doing wrong?
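
    The kinematics here mix frames and totals: each frame recomputes vz from the initial velocity vz0 rather than the current one, and advances z by a per-frame displacement that also uses vz0 (with a sign flipped against G), so the trajectory only looks right at one particular frame time. A standard fix is semi-implicit Euler, integrating the current state by deltaTime each frame; a sketch assuming hypothetical vz/z fields holding the current velocity and position:

      public static final double G = -9.81;

      void updateLocationAndVelocity(double deltaTime) {
          vz += G * deltaTime;   // update the current velocity first...
          z  += vz * deltaTime;  // ...then advance position with the new velocity
      }

    With this form, the result depends (to first order) only on total elapsed time, so frame-rate differences between devices change the smoothness, not the speed.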


  • Retrieve only the superclass from a class hierarchy

    - by user1792724
    I have a scenario like the following:

      @Entity
      @Table(name = "ANIMAL")
      @Inheritance(strategy = InheritanceType.JOINED)
      public class Animal implements Serializable {
          @Id
          @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "S_ANIMAL")
          @SequenceGenerator(name = "S_ANIMAL", sequenceName = "S_ANIMAL", allocationSize = 1)
          public int getNumero() {
              return numero;
          }

          public void setNumero(int numero) {
              this.numero = numero;
          }
          ...
      }

    and as the subclass:

      @Entity
      @Table(name = "DOG")
      public class Dog extends Animal {
          private static final long serialVersionUID = -7341592543130659641L;
          ...
      }

    I have a JPA select statement like this:

      SELECT a FROM Animal a

    I'm using Hibernate 3.3.1. As I can see, the framework retrieves instances of Animal and also of Dog, using a left outer join. Is there a way to select only the Animal "part"? I mean, the previous select will get all the Animals: those that are only Animals and not Dogs, and those that are Dogs. I want them all, but in the case of Dogs I want to retrieve only the "Animal part" of them.

    I found @org.hibernate.annotations.Entity(polymorphism = PolymorphismType.EXPLICIT), but as far as I can see this only works if Animal isn't an @Entity. Thanks a lot.
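
    With JOINED inheritance, Hibernate has to materialize each row as its concrete subclass, which is where the outer join comes from. A constructor expression sidesteps it by selecting only the superclass fields into fresh objects; a sketch, assuming Animal gains a matching constructor (package and fields shown are placeholders):

      SELECT NEW com.example.Animal(a.numero) FROM Animal a

    (If the goal were instead to fetch only rows that are Animals and not Dogs, JPA 2.0's TYPE() predicate does that: SELECT a FROM Animal a WHERE TYPE(a) = Animal.)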


  • How can I create a "dynamic" WHERE clause?

    - by TheChange
    Hello there! First: thanks! I finished my other project, and the big surprise: now everything works as it should :-) Thanks to some helpful thinkers on SO! So here I go with the next project.

    I'd like to get something like this:

      SELECT * FROM tablename WHERE field1=content1 AND field2=content2 ...

    As you noticed, this can become a very long WHERE clause. tablename is a static property which does not change; field1, field2, ... (!) and the contents can change. So I need a way to build up a SQL statement in PL/SQL within a recursive function. I don't really know what to search for, so I am asking here for links, or even just a word to search for.

    Please don't start to argue about whether the recursive function is really needed or what its disadvantages are -- this is not in question ;-) If you could help me create something like a SQL string which can later perform a successful SELECT, that would be very nice! I am able to go through the recursive function and make a longer string each time, but I cannot make a SQL statement from it.

    Oh, one additional thing: I get the fields and contents from an xmlType (xmldom.domdocument etc.); I can get the field and the content, for example, in a CLOB from the xmlType.
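
    Once the recursive function has assembled the WHERE text, the standard way to run it is dynamic SQL: either EXECUTE IMMEDIATE, or OPEN ... FOR with a ref cursor. A minimal sketch (names and values are placeholders; real code should bind or validate the values to avoid SQL injection):

      DECLARE
        TYPE t_cur IS REF CURSOR;
        c       t_cur;
        v_where VARCHAR2(32767) := 'field1 = :1 AND field2 = :2';  -- built recursively
        v_row   tablename%ROWTYPE;
      BEGIN
        OPEN c FOR 'SELECT * FROM tablename WHERE ' || v_where
          USING 'content1', 'content2';
        LOOP
          FETCH c INTO v_row;
          EXIT WHEN c%NOTFOUND;
          -- process v_row here
        END LOOP;
        CLOSE c;
      END;
      /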


  • NSCFString leak involving NSString

    - by Srilakshmi Manthena
    Hi, I am getting a leak at:

      NSString *firstNameStr = [NSString stringWithFormat:@"%s", firstNameString];

    Code:

      + (NSString *)getValueForProperty:(ABPropertyID)propertyId forContact:(NSString *)contactId
      {
          if (addressBook == nil) {
              addressBook = ABAddressBookCreate();
          }
          ABRecordID contactIntId = [contactId intValue];
          ABRecordRef person = ABAddressBookGetPersonWithRecordID(addressBook, contactIntId);
          CFStringRef firstName;
          char *firstNameString;
          firstName = ABRecordCopyValue(person, propertyId);

          // Convert the data to char* so it can be written out
          static char* fallback = "";
          int fbLength = strlen(fallback);
          int firstNameLength = fbLength;
          bool firstNameFallback = true;
          if (firstName != NULL) {
              firstNameLength = (int) CFStringGetLength(firstName);
              firstNameFallback = false;
          }
          if (firstNameLength == 0) {
              firstNameLength = fbLength;
              firstNameFallback = true;
          }
          firstNameString = malloc(sizeof(char) * (firstNameLength + 1));
          if (firstNameFallback == true) {
              strcpy(firstNameString, fallback);
          } else {
              CFStringGetCString(firstName, firstNameString, 10 * CFStringGetLength(firstName), kCFStringEncodingASCII);
          }
          if (firstName != NULL) {
              CFRelease(firstName);
          }
          NSString *firstNameStr = [NSString stringWithFormat:@"%s", firstNameString];
          free(firstNameString);
          return firstNameStr;
      }
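
    Two observations (hedged, not a confirmed diagnosis). stringWithFormat: returns an autoreleased object, so a "leak" reported there usually means the surrounding code runs without a drained autorelease pool (common on background threads), or a caller retains the result and never releases it. Separately, the CFStringGetCString call passes 10 * CFStringGetLength(firstName) as the buffer size while the buffer was malloc'd with only length + 1 bytes, inviting a buffer overrun. The char* round-trip can be dropped entirely via toll-free bridging; a pre-ARC sketch:

      // Bridge the CFString directly; CFStringRef and NSString are toll-free bridged.
      NSString *firstNameStr = (firstName != NULL)
          ? [[(NSString *)firstName retain] autorelease]
          : @"";
      if (firstName != NULL) {
          CFRelease(firstName);   // balances ABRecordCopyValue's +1
      }
      return firstNameStr;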

