Search Results

Search found 2361 results on 95 pages for 'incorrect'.

  • Incompatibility between x86 and x64 in Installation solution

    - by rodnower
    Hello, I have an installation solution with an installer project (not a web setup
    project, but a plain installer) that installs NT services, a web service, and web
    sites, with the help of two additional DLL projects containing my own code that
    performs my installation steps. In the custom actions of the installer project I
    call the installer of one of those projects, and that project calls the installer
    of the second project: installer - MiddleCaller - InstallationCore.

    All of this was developed on Windows 7 and works fine when everything is compiled
    as 32-bit. The project must run on Windows 2008, and for various reasons everything
    must be 64-bit. To that end, in MiddleCaller and InstallationCore I right-clicked
    the project - Build - Target: x64. To move the installer project to 64-bit, in the
    installer's properties (while the project is active) I set Target platform: x64.

    When I run the installation on x86 I get the error "The installation package is not
    supported by this processor type", which is good, because now I know my installation
    is compiled as 64-bit. But when I run it on Windows 2008 I get:

        Error 1001. Exception occured while initializing the instance:
        System.BadImageFormatException: could not load file or Assembly 'MiddleCaller, v...'
        or one of its dependencies. An attempt was made to load a program with an
        incorrect format.

    Does anyone have an idea what I need to do to make the installation run correctly
    on x64? Maybe I still haven't moved the installer project itself to 64-bit; if so,
    where do I do that? Thank you in advance.

  • solve a classic map-reduce problem with opencl?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (one that parallelizes
    well with MPI) with OpenCL, namely the AMD implementation, but the result bothers
    me.

    Let me describe the problem first. Two types of data flow into the system: the
    feature set (30 parameters each) and the sample set (9000+ dimensions each). It is
    a classic map-reduce problem in the sense that I need to calculate the score of
    every feature on every sample (Map), and then sum up the overall score for every
    feature (Reduce). There are around 10k features and 30k samples.

    I tried different ways to solve the problem. First, I tried to decompose the
    problem by features. The problem is that the score calculation consists of random
    memory accesses (pick some of the 9000+ dimensions and do addition/subtraction).
    Since I cannot coalesce the memory accesses, it is costly. Then I tried to
    decompose the problem by samples. The problem there is that to sum up the overall
    score, all threads compete for a few score variables; they keep overwriting the
    score, which turns out to be incorrect. (I cannot compute individual scores first
    and sum them up later, because that would require 10k * 30k * 4 bytes.)

    The first method gives me the same performance as an i7 860 CPU with 8 threads.
    However, I don't think the problem is unsolvable: it is remarkably similar to ray
    tracing (where you calculate millions of rays against millions of triangles).
    Any ideas?
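
    For what it's worth, the usual way around the write contention in the Reduce step
    is a two-pass partial-sum reduction: each worker accumulates into its own private
    slot, and a second pass folds the partials together, so nothing is ever
    overwritten. Below is a minimal single-machine C++ sketch of the idea, not the
    asker's OpenCL kernel; score() and the worker layout are hypothetical stand-ins.
    In OpenCL the same shape maps onto per-work-group partial buffers. Memory cost is
    features x workers, far less than features x samples.

        #include <thread>
        #include <vector>

        // Hypothetical stand-in for the real per-(feature, sample) score.
        double score(int feature, int sample) { return feature * 0.001 + sample * 0.002; }

        // Two-pass reduction: each worker owns a private row of partial sums,
        // so no two workers ever write to the same accumulator.
        std::vector<double> featureScores(int nFeatures, int nSamples, int nWorkers) {
            std::vector<std::vector<double>> partial(
                nWorkers, std::vector<double>(nFeatures, 0.0));
            std::vector<std::thread> pool;
            for (int w = 0; w < nWorkers; ++w) {
                pool.emplace_back([&, w] {
                    for (int s = w; s < nSamples; s += nWorkers)  // this worker's slice
                        for (int f = 0; f < nFeatures; ++f)
                            partial[w][f] += score(f, s);         // private: no races
                });
            }
            for (auto& t : pool) t.join();

            std::vector<double> total(nFeatures, 0.0);            // pass 2: fold partials
            for (int w = 0; w < nWorkers; ++w)
                for (int f = 0; f < nFeatures; ++f)
                    total[f] += partial[w][f];
            return total;
        }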

  • Deployed Qt5 Application Doesn't Print or Show Print Dialog

    - by MustacheMcLimey
    I'm experiencing Qt4 to Qt5 troubles. In my application, when the user clicks the
    print button two things should happen: a PDF gets written to disk (which still
    works fine in the new version, so I know some of the printing functions work
    properly), and a QPrintDialog should exec() and then send to a connected printer.

    I see the dialog when I launch from my development machine. The application
    launches on the deployed machine, but the QPrintDialog never shows and the
    document never prints. I am including print support:

        QT += core gui network webkitwidgets widgets printsupport

    I have been using Process Explorer to see which DLLs the application uses on my
    development machine, and I believe everything is present. My application bundle
    includes:

        {myAppPath}\MyApp\ [MyApp.exe, Qt5PrintSupport.dll, ...]
        {myAppPath}\plugins\printsupport\windowsprintersupport.dll
        {myAppPath}\plugins\imageformats\ [qgif.dll, qico.dll, qjpeg.dll, qmng.dll,
                                           qtga.dll, qtiff.dll, qwbmp.dll]

    The following is the relevant code snippet:

        void PrintableForm::printFile()
        {
            // Writes the PDF to disk in every environment
            pdfCopy();

            // Paper copy only works on my dev machine
            QPrinter paperPrinter;
            QPrintDialog printDialog(&paperPrinter, this);
            if (printDialog.exec() == QDialog::Accepted) {
                view->print(&paperPrinter);
            }
            this->accept();
        }

    My first thought is that the relevant DLLs are not being found come print time,
    which would mean my application file layout is incorrect, but I have not found
    anything that shows a different file structure. Am I on the right track, or is
    there something else wrong with this setup?

  • Explicit Type Conversion and Multiple Simple Type Specifiers

    - by James McNellis
    To value initialize an object of type T, one would do something along the lines of
    one of the following:

        T x = T();
        T x((T()));

    My question concerns types specified by a combination of simple type specifiers,
    e.g., unsigned int:

        unsigned int x = unsigned int();
        unsigned int x((unsigned int()));

    Visual C++ 2008 and Intel C++ Compiler 11.1 accept both of these without warnings;
    Comeau 4.3.10.1b2 and g++ 3.4.5 (which is, admittedly, not particularly recent) do
    not. According to the C++ standard (C++03 5.2.3/2, expr.type.conv):

        The expression T(), where T is a simple-type-specifier (7.1.5.2) for a
        non-array complete object type or the (possibly cv-qualified) void type,
        creates an rvalue of the specified type, which is value-initialized

    7.1.5.2 says, "the simple type specifiers are," and follows with a list that
    includes unsigned and int. Therefore, given that in 5.2.3/2 "simple-type-specifier"
    is singular, and unsigned and int are two type specifiers, are the examples above
    that use unsigned int invalid? (And, if so, the followup is: is it incorrect for
    Microsoft and Intel to support said expressions?)

    This question is more out of curiosity than anything else; for all of the types
    specified by a combination of multiple simple type specifiers, value initialization
    is equivalent to zero initialization. (This question was prompted by comments in
    response to this answer to a question about initialization.)
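
    (As an aside, giving the type a single-token name sidesteps the question entirely,
    since a one-token name is unambiguously a single simple-type-specifier; a small
    sketch:)

        typedef unsigned int uint;   // one-token alias for the multi-token type

        uint a = uint();             // accepted everywhere: T() with a one-token T
        unsigned int b = 0u;         // equivalent here, since value-initialization
                                     // equals zero-initialization for this type

        template <typename T> T value_initialized() { return T(); }
        unsigned int c = value_initialized<unsigned int>();  // T is again one token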

  • open flash chart rails x-axis issue

    - by Jimmy
    Hey guys, I am using open flash chart 2 in my rails application. Everything is
    looking smooth except for the range on my x-axis. I am creating a line to
    represent cell phone plan cost over a specific amount of usage, and I'm generating
    8 values: 1-5 are below the allowed usage, while 6-8 demonstrate the cost for
    usage over the limit.

    The problem is how to set the range of the x-axis in Ruby on Rails to something
    specific to the data. Right now the values being displayed are the indexes of the
    array I'm passing in. When I try to hand a hash to the values, the chart doesn't
    load at all. So basically I need a way to set the data for my line so that it
    displays correctly; right now every value's x position is its index in the array.
    Here is a screenshot, which may describe it better than I can:

        http://i163.photobucket.com/albums/t286/Xeno56/Screenshot.png

    Note that those values are correct; just the range on the x-axis is wrong. It
    should be something like 100, 200, 300, 400, 500, 600, 700. Code:

        y = YAxis.new
        y.set_range(0, 100, 20)

        x_legend = XLegend.new("Usage")
        x_legend.set_style('{font-size: 20px; color: #778877}')

        y_legend = YLegend.new("Cost")
        y_legend.set_style('{font-size: 20px; color: #770077}')

        chart = OpenFlashChart.new
        chart.set_x_legend(x_legend)
        chart.set_y_legend(y_legend)
        chart.y_axis = y

        line = Line.new
        line.text = plan.name
        line.width = 2
        line.color = '#006633'
        line.dot_size = 2
        line.values = generate_data(plan)
        chart.add_element(line)

        def generate_data(plan)
          values = []
          # generate below-threshold numbers
          5.times do |x|
            usage = plan.usage / 5 * x
            cost = plan.cost * 10
            values << cost
          end
          # generate above-threshold numbers
          3.times do |x|
            usage = plan.usage + ((plan.usage / 5) * x)
            cost = plan.cost + (usage * plan.overage)
            values << cost
          end
          return values
        end

  • Implicit conversion : const reference vs non-const reference vs non-reference

    - by Nawaz
    Consider this code:

        struct A {};
        struct B { B(const A&) {} };

        void f(B)
        {
            cout << "f()" << endl;
        }

        void g(A& a)
        {
            cout << "g()" << endl;
            f(a); // a is implicitly converted into B.
        }

        int main()
        {
            A a;
            g(a);
        }

    This compiles fine and runs fine. But if I change f(B) to f(B&), it doesn't
    compile. If I write f(const B&), it again compiles fine and runs fine. Why is
    that, and what is the rationale? Summary:

        void f(B);        // okay
        void f(B&);       // error
        void f(const B&); // okay

    I would like to hear reasons, rationale and reference(s) from the language
    specification for each of these cases. Of course, the function signatures
    themselves are not incorrect. Rather, A implicitly converts into B and into
    const B&, but not into B&, and that causes the compilation error.
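
    (The observed behavior matches the rule that the implicit conversion materializes
    a temporary B, and a temporary can bind to a reference-to-const but not to a
    non-const lvalue reference; the same rule shown in isolation:)

        A a;
        B        b = a;  // okay: the conversion creates a temporary B, which is copied
        const B& c = a;  // okay: the temporary binds to const B& (lifetime extended)
        B&       r = a;  // error: a temporary cannot bind to a non-const reference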

  • apache: cgi links lead to a "you have chosen to open foo.cgi", although scriptalias is set

    - by Xiong Chiamiov
    Following this guide on CentOS 5.2, just getting nagios set up for the first time.
    The main page shows up just fine, but when I try to view any of the pages that
    should be generated by a cgi process, firefox prompts me to save the .cgi instead,
    so apache is obviously not understanding that it needs to run the cgi and get back
    some html from it. The odd thing, though, is that as far as I can tell, apache
    should be running these files as cgi. nagios.conf:

        # SAMPLE CONFIG SNIPPETS FOR APACHE WEB SERVER
        # Last Modified: 11-26-2005
        #
        # This file contains examples of entries that need
        # to be incorporated into your Apache web server
        # configuration file. Customize the paths, etc. as
        # needed to fit your system.

        ScriptAlias /nagios/cgi-bin/ "/usr/lib/nagios/cgi/"

        <Directory "/usr/lib/nagios/cgi/">
        #  SSLRequireSSL
           Options +ExecCGI
           AddHandler cgi-script .cgi
           AllowOverride None
           Order allow,deny
           Allow from all
        #  Order deny,allow
        #  Deny from all
        #  Allow from 127.0.0.1
           AuthName "Nagios Access"
           AuthType Basic
           AuthUserFile /etc/nagios/htpasswd.users
           Require valid-user
        </Directory>

        Alias /nagios "/usr/share/nagios/"

        <Directory "/usr/share/nagios/">
        #  SSLRequireSSL
           DirectoryIndex index.php
           Options None
           AllowOverride None
           Order allow,deny
           Allow from all
        #  Order deny,allow
        #  Deny from all
        #  Allow from 127.0.0.1
           AuthName "Nagios Access"
           AuthType Basic
           AuthUserFile /etc/nagios/htpasswd.users
           Require valid-user
        </Directory>

    Either the ScriptAlias directive or the ExecCGI option should be triggering this,
    but neither seems to have any effect. This config file is being parsed by apache,
    because if I move it out of conf.d, /nagios gives a 404. The .cgi files are indeed
    in the /nagios/cgi-bin/ directory, so I didn't specify an incorrect directory.
    Searching only turned up people who had difficulty with permissions, which is not
    the issue here. This seems like a pretty basic thing, but even with the excellent
    apache documentation, I'm at a bit of a loss (been using cherokee too much
    lately :) ).

  • C# mvc2 client side form validation with xval, prevent post

    - by Rob
    I'm using xval for client side validation in my asp.net mvc2 web application.
    Despite the errors it gives when I enter text in a numeric field, it still tries
    to post the form to the database. The incorrect values are replaced by 0 and saved
    to the database; instead, it shouldn't even be possible to submit the form. Can
    anyone help me out here? I've set the attributes as below:

        [Property]
        [ShowColumnInCrud(true, label = "FromPriceInCents")]
        [Required]
        //[Range(1, Int32.MaxValue)]
        public virtual Int32 FromPriceInCents { get; set; }

    The controller catching the request looks as below; I'm getting no errors in this
    part.

        [AcceptVerbs(HttpVerbs.Post)]
        [Transaction]
        [ValidateInput(false)]
        public override ActionResult Create()
        {
            //some foo happens
        }

    My view looks like below:

        <div class="label"><label for="Price">FromPrice</label></div>
        <div class="field">
            <%= Html.TextBox("FromPriceInCents")%>
            <%= Html.ValidationMessage("product.FromPriceInCents")%></div>

    And at the end of the view I have the following rule, which in the html generates
    the correct validation rules:

        <%= Html.ClientSideValidation<Product>("Product") %>

    I hope someone can help me out with this issue. Thanks in advance!

  • How can I test blades in MVC Turbine with Rhino Mocks?

    - by Brandon Linton
    I'm trying to set up blade unit tests in an MVC Turbine-derived site. The problem
    is that I can't seem to mock the IServiceLocator interface without hitting the
    following exception:

        System.BadImageFormatException: An attempt was made to load a program with an
        incorrect format. (Exception from HRESULT: 0x8007000B)
        at System.Reflection.Emit.TypeBuilder._TermCreateClass(Int32 handle, Module module)
        at System.Reflection.Emit.TypeBuilder.CreateTypeNoLock()
        at System.Reflection.Emit.TypeBuilder.CreateType()
        at Castle.DynamicProxy.Generators.Emitters.AbstractTypeEmitter.BuildType()
        at Castle.DynamicProxy.Generators.Emitters.AbstractTypeEmitter.BuildType()
        at Castle.DynamicProxy.Generators.InterfaceProxyWithTargetGenerator.GenerateCode(Type proxyTargetType, Type[] interfaces, ProxyGenerationOptions options)
        at Castle.DynamicProxy.DefaultProxyBuilder.CreateInterfaceProxyTypeWithoutTarget(Type interfaceToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options)
        at Castle.DynamicProxy.ProxyGenerator.CreateInterfaceProxyTypeWithoutTarget(Type interfaceToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options)
        at Castle.DynamicProxy.ProxyGenerator.CreateInterfaceProxyWithoutTarget(Type interfaceToProxy, Type[] additionalInterfacesToProxy, ProxyGenerationOptions options, IInterceptor[] interceptors)
        at Rhino.Mocks.MockRepository.MockInterface(CreateMockState mockStateFactory, Type type, Type[] extras)
        at Rhino.Mocks.MockRepository.CreateMockObject(Type type, CreateMockState factory, Type[] extras, Object[] argumentsForConstructor)
        at Rhino.Mocks.MockRepository.Stub(Type type, Object[] argumentsForConstructor)
        at Rhino.Mocks.MockRepository.<>c__DisplayClass1`1.<GenerateStub>b__0(MockRepository repo)
        at Rhino.Mocks.MockRepository.CreateMockInReplay<T>(Func`2 createMock)
        at Rhino.Mocks.MockRepository.GenerateStub<T>(Object[] argumentsForConstructor)
        at XXX.BladeTest.SetUp()

    Everything I search for regarding this error leads me to 32-bit vs. 64-bit DLL
    compilation issues, but MVC Turbine uses the service locator facade everywhere and
    we haven't had any other issues, just with using Rhino Mocks to attempt mocking
    it. It blows up on the second line of this NUnit set up method:

        IRotorContext _context;
        IServiceLocator _locator;

        [SetUp]
        public void SetUp()
        {
            _context = MockRepository.GenerateStub<IRotorContext>();
            _locator = MockRepository.GenerateStub<IServiceLocator>();
            _context.Expect(x => x.ServiceLocator).Return(_locator);
        }

    Just a quick aside: I've tried implementing a fake that implements
    IServiceLocator, thinking that I could just keep track of calls to the type
    registration methods. This won't work in our setup, because we extend the service
    locator's interface in such a way that if the type isn't Unity-based, the
    registration logic is not invoked.

  • Algorithm for converting hierarchical flat data (w/ ParentID) into sorted flat list w/ indentation levels

    - by eagle
    I have the following structure:

        MyClass
        {
            guid   ID
            guid   ParentID
            string Name
        }

    I'd like to create an array which contains the elements in the order they should
    be displayed in a hierarchy (e.g. according to their "left" values), as well as a
    hash which maps the guid to the indentation level. For example:

        ID  Name      ParentID
        ------------------------
        1   Cats      2
        2   Animal    NULL
        3   Tiger     1
        4   Book      NULL
        5   Airplane  NULL

    This would essentially produce the following objects:

        // Array is an array of all the elements sorted by the way you would see them
        // in a fully expanded tree
        Array[0] = "Airplane"
        Array[1] = "Animal"
        Array[2] = "Cats"
        Array[3] = "Tiger"
        Array[4] = "Book"

        // IndentationLevel is a hash of GUIDs to IndentationLevels.
        IndentationLevel["1"] = 1
        IndentationLevel["2"] = 0
        IndentationLevel["3"] = 2
        IndentationLevel["4"] = 0
        IndentationLevel["5"] = 0

    For clarity, this is what the hierarchy looks like:

        Airplane
        Animal
            Cats
                Tiger
        Book

    I'd like to iterate through the items the least number of times possible. I also
    don't want to create a hierarchical data structure; I'd prefer to use arrays,
    hashes, stacks, or queues. The two objectives (see the sketch after this entry)
    are:

    1. Store a hash of the ID to the indentation level.
    2. Sort the list that holds all the objects according to their left values.

    When I get the list of elements, they are in no particular order. Siblings should
    be ordered by their Name property.

    Update: This may seem like I haven't tried coming up with a solution myself and
    simply want others to do the work for me. However, I have tried coming up with
    three different solutions, and I've gotten stuck on each. One reason might be that
    I've tried to avoid recursion (maybe wrongly so). I'm not posting the partial
    solutions I have so far since they are incorrect and may badly influence the
    solutions of others.
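
    (For reference, one sketch of the usual approach, in C++ with int ids standing in
    for the GUIDs and 0 standing in for the NULL parent; recursion is avoided by using
    an explicit stack. Group children by ParentID, sort each group by Name, then do a
    depth-first walk that emits items in display order while recording each item's
    depth:)

        #include <algorithm>
        #include <map>
        #include <stack>
        #include <string>
        #include <utility>
        #include <vector>

        struct Item { int id; int parentId; std::string name; };  // parentId 0 == root

        // Returns items in display order; fills `depth` with id -> indentation level.
        std::vector<Item> flatten(const std::vector<Item>& items,
                                  std::map<int, int>& depth)
        {
            std::map<int, std::vector<Item>> children;        // group by parent
            for (const auto& it : items) children[it.parentId].push_back(it);
            for (auto& kv : children)                         // siblings ordered by Name
                std::sort(kv.second.begin(), kv.second.end(),
                          [](const Item& a, const Item& b) { return a.name < b.name; });

            std::vector<Item> out;
            std::stack<std::pair<Item, int>> st;              // (item, its depth)
            auto pushGroup = [&](int parent, int d) {         // reversed so pop order
                auto& g = children[parent];                   // is the sorted order
                for (auto it = g.rbegin(); it != g.rend(); ++it) st.push({*it, d});
            };
            pushGroup(0, 0);
            while (!st.empty()) {
                auto [it, d] = st.top(); st.pop();
                depth[it.id] = d;
                out.push_back(it);
                pushGroup(it.id, d + 1);                      // children follow parent
            }
            return out;
        }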

  • ASP.NET MVC : strange POST behavior

    - by user93422
    ASP.NET MVC 2 app. I have two actions on my controller (Toons):

        [GET]  List
        [POST] Add

    The app is running on IIS7 in integrated mode, so /Toons/List works fine. But when
    I do a POST (which redirects to /Toons/List internally), it redirects (with 302
    Object Moved) back to /Toons/Add. The problem goes away if I use the .aspx hack
    (which works in IIS6/IIS7 classic mode). Without .aspx, GET works fine, but POST
    redirects back onto itself as a GET. What am I missing? I'm hosting with
    webhost4life.com and they did change IIS7 to integrated mode already.

    EDIT: The code works as expected using the UltiDev Cassini server.

    EDIT: It turned out to be a trailing-slash-in-URL issue. Somehow IIS7 doesn't
    route the request properly if there is no slash at the end.

    EDIT: Explanation of the behavior. When I request (POST) /Toons/List (without a
    trailing slash), IIS doesn't find the handler (I don't know exactly how IIS does
    URL-to-handler mapping) and redirects the request (using a 302 code) to
    /Toons/List/ (notice the trailing slash). A browser, according to the HTTP
    specification, must redirect the request using the same method (POST in this
    case), but instead it handles the 302 as if it were a 303 and issues a GET request
    for the new URL. This is incorrect, but known, behavior of most browsers.

    The solution is either to use the .aspx hack to make it unambiguous to IIS how to
    map requests to the ASP.NET handler, or to configure IIS to handle everything in
    the virtual directory with the ASP.NET handler. Q: which is the better way to
    handle this?

  • Using an existing IQueryable to create a new dynamic IQueryable

    - by dnoxs
    I have a query as follows:

        var query = from x in context.Employees
                    where (x.Salary > 0 && x.DeptId == 5) || x.DeptId == 2
                    orderby x.Surname
                    select x;

    The above is the original query and returns, let's say, 1000 employee entities. I
    would now like to take the first query, deconstruct it, and recreate a new query
    that would look like this:

        var query = from x in context.Employees
                    where ((x.Salary > 0 && x.DeptId == 5) || x.DeptId == 2)
                          && (x,i) i % 10 == 0
                    orderby x.Surname
                    select x.Surname;

    This query would return 100 surnames. The syntax is probably incorrect, but what I
    need to do is attach an additional where clause and narrow the select to a single
    field. I've been looking into the ExpressionVisitor, but I'm not entirely sure how
    to create a new query based on an existing one. Any guidance would be appreciated.
    Thank you.

  • jQuery UI Datepicker and Google Chrome not working

    - by zA
    Hi, I'm having some problems with the jQuery UI Datepicker and Google Chrome. My
    datepicker works as expected with IE8, Firefox and Safari. The problem is when
    clicking the datepicker-connected textbox in Chrome: it gives me a crash page,
    "Oops, an error occurred...".

    On my page there's a textbox with a datepicker. The datepicker is language
    dependent and loads the correct language settings dynamically. It should also
    display the month and year dropdowns. The code is as follows:

        $(function() {
            $.datepicker.setDefaults(
                $.extend({ changeMonth: true, changeYear: true },
                         $.datepicker.regional['']));
            $('#<%= TextBoxBirthDate.ClientID %>').datepicker(
                $.datepicker.regional[$('#LabelRegionalSettings').val()]);
        });

    If I only extend the datepicker with one option, i.e. changeYear, it works in
    Chrome. But if I add another option, i.e. changeMonth, the crash in Chrome occurs.
    Is my code incorrect? If so, how do I fix it? Any help is greatly appreciated!

  • mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to
    use gmail to send an e-mail after each push and/or commit.

    .hgrc:

        [paths]
        default = ssh://www.domain.com/repo/hg

        [ui]
        username = intern <[email protected]>
        ssh = "C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"

        [extensions]
        hgext.notify =

        [hooks]
        changegroup.notify = python:hgext.notify.hook
        incoming.notify = python:hgext.notify.hook

        [email]
        from = [email protected]

        [smtp]
        host = smtp.gmail.com
        username = [email protected]
        password = sure
        port = 587
        tls = true

        [web]
        baseurl = http://dev/...

        [notify]
        sources = serve push pull bundle
        test = False
        config = /path/to/subscription/file
        template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
        maxdiff = 300

    Error:

        Incoming comand failed for P/project.
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg! , error code: -1
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base which works with the gcc and Visual
    Studio compilers, where the enum base type is by default 32-bit (integer type).
    This code also has lots of inline + embedded assembly which treats enums as
    integer type, and enum data is used as 32-bit flags in many cases.

    When compiling this code with the RealView ARM RVCT 2.2 compiler, we started
    getting many issues, since the RealView compiler decides the enum base type
    automatically based on the values the enum is set to:

        http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm

    For example, consider the enum below:

        enum Scale
        {
            TimesOne,   // 0
            TimesTwo,   // 1
            TimesFour,  // 2
            TimesEight, // 3
        };

    This enum is used as a 32-bit flag, but the compiler optimizes the base type of
    this enum down to unsigned char.

    Using the --enum_is_int compiler option is not a good solution in our case, since
    it converts all enums to 32-bit, which will break interaction with any external
    code compiled without --enum_is_int. This is the warning I found in the RVCT
    Compilers and Libraries Guide:

        The --enum_is_int option is not recommended for general use and is not
        required for ISO-compatible source. Code compiled with this option is not
        compliant with the ABI for the ARM Architecture (base standard) [BSABI], and
        incorrect use might result in a failure at runtime. This option is not
        supported by the C++ libraries.

    Question: How do I convert every enum's base type (by hand-coded changes) to
    32-bit without affecting value ordering?

        enum Scale
        {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    I tried the above change, but to our bad luck the compiler optimizes this too. :(
    There is some syntax in .NET like

        enum Scale : int

    Is this an ISO C++ standard feature that the ARM compiler lacks? There is no
    #pragma to control this enum in the ARM RVCT 2.2 compiler. Is there any hidden
    pragma available?
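
    (A common hand-coded workaround, sketched here under the assumption that the
    compiler follows its documented pick-the-smallest-type rule, is to add a sentinel
    enumerator whose value does not fit in anything smaller than 32 bits:)

        enum Scale
        {
            TimesOne,                     // 0
            TimesTwo,                     // 1
            TimesFour,                    // 2
            TimesEight,                   // 3
            Scale_Force32 = 0x7FFFFFFF    // sentinel: forces an underlying type of
                                          // at least 32 bits; never used as a value
        };

    (The "enum Scale : int" syntax is a fixed underlying type, which was standardized
    in C++ only later, in C++11, so a 2.2-era compiler predates it.)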

  • Big and Little endian question

    - by Bobby
    I have the following code:

        // Incrementer
        datastores.cmtDatastores.u32Region[0] += 1;

        // Decrementer
        datastores.cmtDatastores.u32Region[1] =
            (datastores.cmtDatastores.u32Region[1] == 0) ?
                10 : datastores.cmtDatastores.u32Region[1] - 1;

        // Toggler
        datastores.cmtDatastores.u32Region[2] =
            (datastores.cmtDatastores.u32Region[2] == 0x0000) ? 0xFFFF : 0x0000;

    The u32Region array is an unsigned int array that is part of a struct. Later in
    the code I convert this array to big endian format:

        unsigned long *swapL = (unsigned long*)&datastores.cmtDatastores.u32Region[50];
        for (int i = 0; i < 50; i++)
        {
            swapL[i] = _byteswap_ulong(swapL[i]);
        }

    This entire code snippet is part of a loop that repeats indefinitely. It is a
    contrived program that increments one element, decrements another, and toggles a
    third. The array is then sent via TCP to another machine that unpacks the data.

    The first loop iteration works fine. After that, since the data is in big endian
    format, the "increment", "decrement", and "toggle" values are incorrect.
    Obviously, if in the first iteration

        datastores.cmtDatastores.u32Region[0] += 1;

    results in 1, in the second iteration it should be 2, but it's not: it is adding
    the number 1 (little endian) to the number in u32Region[0] (big endian). I guess I
    have to revert back to little endian at the start of every loop, but it seems
    there should be an easier way to do this. Any thoughts?

    Thanks,
    Bobby
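
    (One straightforward answer, sketched below: keep u32Region in native byte order
    for its whole lifetime and byte-swap a scratch copy only at send time, so the
    arithmetic never sees swapped values. The send function, names and sizes here are
    placeholders, not the original program:)

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // Hypothetical stand-in for the real TCP send call.
        void tcpSend(const void* buf, std::size_t len);

        // Portable 32-bit byte swap (same effect as _byteswap_ulong on MSVC
        // for 32-bit values).
        inline std::uint32_t byteSwap32(std::uint32_t v)
        {
            return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
                   ((v << 8) & 0x00FF0000u) | (v << 24);
        }

        // Swap the copy, not the working array: the caller's data stays in
        // native (little endian) order, and only the wire buffer is big endian.
        void sendRegionBigEndian(const std::uint32_t* region, std::size_t count)
        {
            std::vector<std::uint32_t> wire(region, region + count);
            for (std::uint32_t& v : wire)
                v = byteSwap32(v);
            tcpSend(wire.data(), wire.size() * sizeof(std::uint32_t));
        }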

  • Get more debug info from AxHost?

    - by Presidenten
    Hello. I'm trying to deploy an application which uses a library that embeds an
    ActiveX control with AxHost in C#. When I run the installed app on our test rig, I
    catch and present the following exception:

        Unexpected exception. This application has failed to start because the
        application configuration is incorrect. Reinstalling the application may fix
        this problem. (Exception from HRESULT: 0x800736B1)
        at System.Windows.Forms.UnsafeNativeMethods.CoCreateInstance(Guid& clsid, Object punkOuter, Int32 context, Guid& iid)
        at System.Windows.Forms.AxHost.CreateWithoutLicense(Guid clsid)
        at System.Windows.Forms.AxHost.CreateWithLicense(String license, Guid clsid)
        at System.Windows.Forms.AxHost.CreateInstanceCore(Guid clsid)
        at System.Windows.Forms.AxHost.CreateInstance()
        at System.Windows.Forms.AxHost.GetOcxCreate()
        at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)
        at System.Windows.Forms.AxHost.CreateHandle()
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.AxHost.EndInit()
        at ....InitializeComponent()
        at ...

    I googled 0x800736B1, so I know it means that a file could not be loaded. The big
    question right now is how to find out which file it is that can't be loaded. Is
    there some sort of logging function I can turn on, or is there maybe some way I
    can get more info from the exception?

  • NoSuchProviderException: smtp with log4j SMTP appender

    - by user1016403
    I am using log4j to send an email when there is an exception. Below is my log4j
    properties file configuration:

        log4j.rootLogger=WARN, R, email

        log4j.appender.R=org.apache.log4j.ConsoleAppender
        log4j.appender.R.layout=org.apache.log4j.PatternLayout
        log4j.appender.R.layout.ConversionPattern=%d{HH:mm:ss} %-5p [%c{1}]: %m%n

        log4j.appender.email=org.apache.log4j.net.SMTPAppender
        log4j.appender.email.BufferSize=10
        log4j.appender.email.SMTPHost=myhost.com
        log4j.appender.email.From=[email protected]
        log4j.appender.email.To=[email protected]
        log4j.appender.email.Subject=Error
        log4j.appender.email.layout=org.apache.log4j.PatternLayout

    Mine is a Maven project; I have added dependencies for mail.jar, activation.jar
    and smtp.jar. But on application server startup itself I get the error below:

        [ERROR] log4j:ERROR Error occured while sending e-mail notification.
        [ERROR] javax.mail.NoSuchProviderException: smtp
        [ERROR]     at javax.mail.Session.getService(Session.java:782)
        [ERROR]     at javax.mail.Session.getTransport(Session.java:708)
        [ERROR]     at javax.mail.Session.getTransport(Session.java:651)
        [ERROR]     at javax.mail.Session.getTransport(Session.java:631)
        [ERROR]     at javax.mail.Session.getTransport(Session.java:686)
        [ERROR]     at javax.mail.Transport.send0(Transport.java:166)

    Am I missing anything here? What is the root cause of the error? Is it because of
    an incorrect SMTP host name, or because of missing/conflicting dependencies?

  • Hello, I am using the Android code to connect to Facebook but getting "Facebook Server Error + 104 - Incorrect signature"

    - by Shalini Singh
    Hello, I am using the Android code below to connect to Facebook, but I am getting
    a "Facebook Server Error + 104 - Incorrect signature" exception at the point of
    the onLoginSuccess function. The code is given below:

        public class FacebookConnection extends Activity implements LoginListener {
            private FBRocket fbRocket;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                // You need to put in your Facebook API key here:
                fbRocket = new FBRocket(this, "test", "e2c8deda78b007466c54f48e6359e02e");

                // Determine whether there exists a previously-saved Facebook:
                if (fbRocket.existsSavedFacebook()) {
                    String str = fbRocket.getAPIKey();
                    Log.e("Api key", str);
                    fbRocket.loadFacebook();
                } else {
                    fbRocket.login(R.layout.main);
                    String str = fbRocket.getAPIKey();
                    Log.e("Api key", str);
                }
            }

            public void onLoginFail() {
                fbRocket.displayToast("Login failed!");
                fbRocket.login(R.layout.main);
            }

            public void onLoginSuccess(Facebook facebook) {
                fbRocket.displayToast("Login success!******************");
                // Set the logged-in user's status:
                try {
                    facebook.setStatus("I am using Facebook -- it's great!");
                    // Just get the uid of the first friend returned...
                    String uid = facebook.getFriendUIDs().get(0);
                    // ...and retrieve this friend's name.
                    fbRocket.displayDialog("Friend's name: " + facebook.getFriend(uid).name);
                } catch (ServerErrorException e) {
                    // Check if the exception was caused by not being logged-in:
                    if (e.notLoggedIn()) {
                        // ...if it was, then login again:
                        fbRocket.login(R.layout.main);
                    } else {
                        System.out.println(e);
                        e.printStackTrace();
                    }
                }
            }
        }

  • Check for valid IMEI

    - by Tim
    Hi, does somebody know how to check for a valid IMEI? I found a function to check
    on this page:

        http://www.dotnetfunda.com/articles/article597-imeivalidator-in-vbnet-.aspx

    But it returns False for valid IMEIs (e.g. 352972024585360). I can validate them
    online on this page:

        http://www.numberingplans.com/?page=analysis&sub=imeinr

    What is the correct way (in VB.Net) to check whether a given IMEI is valid?
    Regards, Tim

    PS: This function from the page above must be incorrect in some way:

        Public Shared Function isImeiValid(ByVal IMEI As String) As Boolean
            Dim cnt As Integer = 0
            Dim nw As String = String.Empty
            Try
                For Each c As Char In IMEI
                    cnt += 1
                    If cnt Mod 2 <> 0 Then
                        nw += c
                    Else
                        ' Every second digit has to be doubled
                        Dim d As Integer = Integer.Parse(c) * 2
                        ' Generate a new number with doubled digits
                        nw += d.ToString()
                    End If
                Next
                Dim tot As Integer = 0
                For Each ch As Char In nw.Remove(nw.Length - 1, 1)
                    ' Adding all digits together
                    tot += Integer.Parse(ch)
                Next
                ' Finding the check digit by taking the remainder of the sum
                ' and subtracting it from 10
                Dim chDigit As Integer = 10 - (tot Mod 10)
                ' Comparing the check digit with the last digit of the given IMEI
                If chDigit = Integer.Parse(IMEI(IMEI.Length - 1)) Then
                    Return True
                Else
                    Return False
                End If
            Catch ex As Exception
                Return False
            End Try
        End Function
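
    (For comparison, a standard Luhn check, sketched here in C++ rather than VB.Net:
    an IMEI is valid when the Luhn sum over all 15 digits is divisible by 10. This
    version accepts the 352972024585360 example above.)

        #include <cctype>
        #include <string>

        bool isImeiValid(const std::string& imei)
        {
            if (imei.size() != 15) return false;
            int sum = 0;
            for (int i = 0; i < 15; ++i) {
                char c = imei[14 - i];            // walk right to left
                if (!std::isdigit(static_cast<unsigned char>(c))) return false;
                int d = c - '0';
                if (i % 2 == 1) {                 // double every second digit...
                    d *= 2;
                    if (d > 9) d -= 9;            // ...summing its two digits if >= 10
                }
                sum += d;
            }
            return sum % 10 == 0;                 // valid when divisible by 10
        }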

  • How to know about MySQL 'refused connections'

    - by celalo
    Hello, I am using MONyog to monitor my two mysql servers, and I get alert emails
    from MONyog when something goes wrong. There is an error I could not figure out.
    It says:

        Connection History: (Percentage of refused connections) - 66.67%

    The percentage is not important; this is just about having refused connections at
    all. I get this email every half an hour, so this is a constant situation. This
    must be my mistake, because I just set up those servers and there is no chance
    somebody else could be interfering with them. MONyog advises me:

        Try to isolate users/applications that are using an incorrect password or
        trying to connect from unauthorized hosts. A client will be disallowed to
        connect if it takes more than connect_timeout seconds to connect. Set the
        value of the log_warnings system variable to 2. This will force the MySQL
        server to log further information about the error.

    I added log_warnings=2 to my.cnf and enabled logging like this:

        [mysqld_safe]
        .
        .
        log_warnings=2
        log-error = /var/log/mysql/error.log
        .
        .

        [mysqld_safe]
        .
        log-error=/var/log/mysqld.log
        .
        .

    I cannot see any warnings at /var/log/mysql/error.log. I can see some warnings at
    /var/log/mysqld.log, but they are about something else. In sum, my question is:
    how can I detect refused connections? Please let me know if any more info is
    required. Thanks in advance.

  • What are some Java memory management best practices?

    - by Ascalonian
    I am taking over some applications from a previous developer. When I run the
    applications through Eclipse, I see the memory usage and the heap size increase a
    lot. Upon further investigation, I saw that they were creating an object
    over-and-over in a loop, among other things.

    I started to go through and do some clean up. But the more I went through, the
    more questions I had, like "will this actually do anything?" For example, instead
    of declaring a variable outside the loop mentioned above and just setting its
    value in the loop, they created the object in the loop. What I mean is:

        for (int i = 0; i < arrayOfStuff.size(); i++) {
            String something = (String) arrayOfStuff.get(i);
            ...
        }

    versus

        String something = null;
        for (int i = 0; i < arrayOfStuff.size(); i++) {
            something = (String) arrayOfStuff.get(i);
        }

    Am I incorrect to say that the bottom loop is better? Perhaps I am wrong. Also,
    what about setting "something" back to null after the second loop above? Would
    that clear out some memory? In either case, what are some good memory management
    best practices that will help keep memory usage low in my applications?

    Update: I appreciate everyone's feedback so far. However, I was not really asking
    about the loops above (although by your advice I did go back to the first loop).
    I am trying to get some best practices I can keep an eye out for, something along
    the lines of "when you are done using a Collection, clear it out". I just really
    need to make sure these applications aren't taking up more memory than necessary.

  • Pixel shader weird compilation error

    - by ytrewq
    Hi, I'm experimenting with shaders a bit and I keep getting a weird compilation
    error that's driving me crazy! The following pixel shader code snippet:

        DirectionVector = normalize(f3LightPosition[i] - PixelPos);
        LightVec = PixelNormal - DirectionVector;
        // Get the light strength factor
        LightStrFactor = float(abs((LightVec.x + LightVec.y + LightVec.z) / 3.0f));
        // TEST!!!
        LightStrFactor = 1.0f;
        // Add this light to the total light on this pixel
        LightVal += f4Light[i] * LightStrFactor;

    works perfectly, but as soon as I remove the "LightStrFactor = 1.0f;" line, i.e.
    let LightStrFactor keep the result of the calculation above it, the shader fails
    to compile.

    LightStrFactor is a float, LightVal and f4Light[i] are float4, and all the rest
    are float3.

    My question is, besides why it doesn't compile, why does the DX compiler care
    about the value of a float? Even if my values are incorrect, shouldn't that be a
    run-time matter? The shader compilation code is this:

        /* Compile the bitch */
        if (FAILED(D3DXCompileShaderFromFile(fileName, NULL, NULL, "PS_MAIN", "ps_2_0",
                                             0, &this->m_pCode, NULL,
                                             &this->m_constantTable)))
            GraphicException("Failed to compile pixel shader!"); // <-- gets here :(

        if (FAILED(g_D3dDevice->CreatePixelShader(
                (DWORD*)this->m_pCode->GetBufferPointer(), &this->m_hPixelShader)))
            GraphicException("Failed to create pixel shader!");

        this->m_fLoaded = true;

    Any help is appreciated. Thanks!!! :]

  • Debugger ignores me.

    - by atch
    Having this code:

        Date::Date(const char* day, const char* month, const char* year)
            : is_leap__(false)
        {
            my_day_ = lexical_cast<int>(day);
            my_month_ = static_cast<Month>(lexical_cast<int>(month));
            /* Have to check month here, other ctor assumes month is correct */
            if (my_month_ < 1 || my_month_ > 12)
            {
                throw std::exception("Incorrect month.");
            }
            my_year_ = lexical_cast<int>(year);
            if (!check_range_year_(my_year_))
            {
                throw std::exception("Year out of range.");
            }
            if (check_leap_year_(my_year_)) // SKIPS THIS LINE
            {
                is_leap__ = true;
            }
            if (!check_range_day_(my_day_, my_month_))
            {
                throw std::exception("Day out of range.");
            }
        }

        // IF I MARK THE LINE BELOW WITH A BREAKPOINT, I GET A MESSAGE THAT
        // NO EXECUTABLE CODE IS ASSOCIATED WITH THIS LINE
        bool Date::check_leap_year_(int year) const
        {
            if (!(year % 400) || (year % 100 && !(year % 4)))
            {
                return true;
            }
            else
            {
                return false;
            }
        }

    This is very strange in my opinion. There is a call to this function in my code;
    why does the compiler ignore it?

    P.S. I'm trying to debug in release.

  • Asp.Net MVC2 Clientside Validation problem with controls with prefixes

    - by alexander
    The problem is: when I put two controls of the same type on a page, I need to
    specify different prefixes for binding. In this case the validation rules
    generated right after the form are incorrect. So how do I get client validation
    to work for this case? The page contains:

        <% Html.RenderPartial(ViewLocations.Shared.PhoneEditPartial,
               new PhoneViewModel { Phone = person.PhonePhone, Prefix = "PhonePhone" });
           Html.RenderPartial(ViewLocations.Shared.PhoneEditPartial,
               new PhoneViewModel { Phone = person.FaxPhone, Prefix = "FaxPhone" });
        %>

    The control, a ViewUserControl<PhoneViewModel>, contains:

        <%= Html.TextBox(Model.GetPrefixed("CountryCode"), Model.Phone.CountryCode) %>
        <%= Html.ValidationMessage("Phone.CountryCode",
                new { id = Model.GetPrefixed("CountryCode"),
                      name = Model.GetPrefixed("CountryCode") }) %>

    where Model.GetPrefixed("CountryCode") just returns "FaxPhone.CountryCode" or
    "PhonePhone.CountryCode", depending on the prefix.

    The validation rules generated after the form are duplicated for the field name
    "Phone.CountryCode", while the desired result is two rules (required, number) for
    each of the field names "FaxPhone.CountryCode" and "PhonePhone.CountryCode".

    The question somewhat duplicates
    http://stackoverflow.com/questions/2675606/asp-net-mvc2-clientside-validation-and-duplicate-ids-problem
    but the advice to manually generate ids doesn't help.
