Search Results

Search found 22308 results on 893 pages for 'floating point'.

Page 722/893 | < Previous Page | 718 719 720 721 722 723 724 725 726 727 728 729  | Next Page >

  • Improve speed of a JOIN in MySQL

    - by ran2
    Dear all, I know there are similar threads around, but this is really the first time I realize that query speed might affect me - so it's not that easy for me to transfer other folks' problems to my own. That being said, I have used the following query successfully with smaller data, but when I use it on mildly large tables (about 120,000 records) I am left waiting for hours.
        INSERT INTO anothertable (id, someint1, someint1, somevarchar1, somevarchar1)
        SELECT DISTINCT md.id, md.someint1, md.someint1, md.somevarchar1, pd.somevarchar1
        FROM table1 AS md
        JOIN table2 AS pd ON (md.id = pd.id);
    Tables 1 and 2 contain about 120,000 records each. The query has been running for almost 2 hours right now. Is this normal? Do I just have to wait? I really have no idea, but I am pretty sure one could do better, since it's my very first try. I read about indexing, but don't know yet what to index in my case. Thanks for any suggestions - feel free to point me to the very beginners' guides! best matt
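    Not the answer itself, but the usual first check, sketched under the assumption that id is the join column and is not already a primary key or otherwise indexed on both tables - a join between two ~120,000-row tables without an index on the join column can degenerate into a very slow scan:
        -- Assumes `id` is the join column and not yet indexed
        ALTER TABLE table1 ADD INDEX idx_table1_id (id);
        ALTER TABLE table2 ADD INDEX idx_table2_id (id);
        -- EXPLAIN on the SELECT part shows whether the join now uses the indexes
        EXPLAIN SELECT DISTINCT md.id, md.someint1, md.someint1, md.somevarchar1, pd.somevarchar1
        FROM table1 AS md JOIN table2 AS pd ON (md.id = pd.id);
    With the join column indexed, a query over row counts like these should finish in seconds rather than hours; the DISTINCT adds some sorting cost but not hours of it.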

    Read the article

  • What if you used the wrong language?

    - by HS
    A reply to another question made me remember a project from some years ago when it turned out that Java was not the right tool to use. I typically only learn a new language when I have a problem that it solves better than the ones I already know. [...] Then I write whatever program I wanted to learn that language for in the first place. [...] By the time I've gotten my target program written, I've usually got a decent handle on the language, not to mention any other features it has, and I've got other ideas to use it for. I did just that back then with Java, because the client thought it to be the right language to use (platform independent) and the initial evaluation confirmed that. However, much later in the project there were some issues (I can't really remember all the details by now). So the project that started as a nice learning experience turned into a nightmare toward the end. I was at the brink of switching over to my trusted C++ and doing a complete rewrite. The client was not so much of a problem to convince back then, but my supervisor was strongly opposed because of all the work that had already gone into the Java version. In hindsight, he was right and the project was completed more or less with the intended features kind of working, but it is the project that I am least proud of by now. Long story short: what do you think? When is it too much, and when is the switch to another technology worthwhile? I personally would estimate the point of no return to be around 50% of the planned effort, but I really want to know if anyone has real experience with such a switch. And to answer the inevitable question: I do not really care whether the technology switched to is proven or another new thing. The latter would basically need more initial scrutiny, based on past experiences in the problematic project.

    Read the article

  • ibatis throwing NullPointerException

    - by Prashant P
    I am trying to test iBATIS with a DB and I get a NullPointerException. Below are the class and the iBATIS bean config:
        <select id="getByWorkplaceId" parameterClass="java.lang.Integer" resultMap="result">
            select * from WorkDetails where workplaceCode=#workplaceCode#
        </select>
        <select id="getWorkplace" resultClass="com.ibatis.text.WorkDetails">
            select * from WorkDetails
        </select>
    POJO:
        public class WorkplaceDetail implements Serializable {
            private static final long serialVersionUID = -6760386803958725272L;
            private int code;
            private String plant;
            private String compRegNum;
            private String numOfEmps;
            private String typeIndst;
            private String typeProd;
            private String note1;
            private String note2;
            private String note3;
            private String note4;
            private String note5;
        }
    DAO implementation:
        public class WorkplaceDetailImpl implements WorkplaceDetailsDAO {
            private SqlMapClient sqlMapClient;
            public void setSqlMapClient(SqlMapClient sqlMapClient) {
                this.sqlMapClient = sqlMapClient;
            }
            @Override
            public WorkplaceDetail getWorkplaceDetail(int code) {
                WorkplaceDetail workplaceDetail = new WorkplaceDetail();
                try {
                    **workplaceDetail = (WorkplaceDetail) this.sqlMapClient.queryForObject("workplaceDetail.getByWorkplaceId", code);**
                } catch (SQLException sqlex) {
                    sqlex.printStackTrace();
                }
                return workplaceDetail;
            }
    Test code:
        public class TestDAO {
            public static void main(String args[]) throws Exception {
                WorkplaceDetail wd = new WorkplaceDetail(126, "Hoonkee", "1234", "22", "Service", "Tele", "hsgd", "hsgd", "hsgd", "hsgd", "hsgd");
                WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
                **impl.getWorkplaceDetail(wd.getCode());**
                impl.saveOrUpdateWorkplaceDetails(wd);
                System.out.println("dhsd" + impl);
            }
        }
    I want to select and insert. I have marked the point of the exception with ** ** in the code above.
        Exception in thread "main" java.lang.NullPointerException
            at com.ibatis.text.WorkplaceDetailImpl.getWorkplaceDetail(WorkplaceDetailImpl.java:19)
            at com.ibatis.text.TestDAO.main(TestDAO.java:11)
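    The NullPointerException at the marked line is consistent with sqlMapClient never being set: TestDAO instantiates WorkplaceDetailImpl directly and never calls setSqlMapClient, so the field is still null when queryForObject runs. A minimal sketch of wiring it up by hand with the classic iBATIS 2.x API (the config file name is an assumption):
        import java.io.Reader;
        import com.ibatis.common.resources.Resources;
        import com.ibatis.sqlmap.client.SqlMapClient;
        import com.ibatis.sqlmap.client.SqlMapClientBuilder;

        public class TestDAO {
            public static void main(String[] args) throws Exception {
                // Build the SqlMapClient from the sql-map-config file (hypothetical name)
                Reader reader = Resources.getResourceAsReader("SqlMapConfig.xml");
                SqlMapClient client = SqlMapClientBuilder.buildSqlMapClient(reader);

                WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
                impl.setSqlMapClient(client);   // without this the DAO dereferences null
                impl.getWorkplaceDetail(126);
            }
        }
    If the DAO is normally wired by Spring, this setter is exactly what the bean definition would have to inject.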

    Read the article

  • Why would I need a using statement for Library B extension methods, if they're used in Library A & its Li

    - by Greg
    Hi, I have:
    - Main Program Class - uses Library A
    - Library A - has partial classes which mix in methods from Library B
    - Library B - mix-in methods & interfaces
    Why would I need a using statement for Library B just to get its extension methods working in the main class? That is, given that it's Library B that defines the classes that will be extended. EDIT - excerpt from the code:
        // *** PROGRAM ***
        using TopologyDAL;
        using Topology;   // *** THIS WAS NEEDED TO GET EXTN METHODS APPEARING ***
        class Program
        {
            static void Main(string[] args)
            {
                var context = new Model1Container();
                Node myNode;
                // ** trying to get myNode mixin methods to appear seems to need a using line pointing to Library B ***
            }
        }

        // ** LIBRARY A
        namespace TopologyDAL
        {
            public partial class Node
            {
                // Auto generated from EF
            }
            public partial class Node : INode<int>   // to add extension methods from Library B
            {
                public int Key
            }
        }

        // ** LIBRARY B
        namespace ToplogyLibrary
        {
            public static class NodeExtns
            {
                public static void FromNodeMixin<T>(this INode<T> node)
                {
                    // XXXX
                }
            }
            public interface INode<T>
            {
                // Properties
                T Key { get; }
                // Methods
            }
        }
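    This is expected C# behaviour rather than anything specific to these libraries: the compiler only considers extension methods whose containing static class lives in a namespace imported (via using) in the file where the call appears, regardless of where the extended type is defined. A small self-contained sketch of the same situation (names are illustrative, not the code above):
        // File: LibB.cs (Library B)
        namespace LibB
        {
            public interface INode<T> { T Key { get; } }

            public static class NodeExtensions
            {
                public static T KeyOrDefault<T>(this INode<T> node)
                {
                    return node == null ? default(T) : node.Key;
                }
            }
        }

        // File: Program.cs (main program) - the using here is what makes the call compile
        using LibB;

        class SimpleNode : INode<int>
        {
            public int Key { get { return 42; } }
        }

        class Program
        {
            static void Main()
            {
                var myNode = new SimpleNode();
                System.Console.WriteLine(myNode.KeyOrDefault());  // resolves only because of "using LibB;"
            }
        }
    A common way to avoid the extra using is to declare the extension class in the same namespace as the types it extends, so a single import brings in both.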

    Read the article

  • Independent name of a class

    - by tobi
    We have a class lua. In the lua class there is a method registerFun() which is defined as:
        void lua::registerFun() {
            lua_register( luaState, "asd", luaApi::asd);
            lua_register( luaState, "zxc", luaApi::zxc);
        }
    lua_register is a built-in function from the Lua library (http://pgl.yoyo.org/luai/i/lua_register); it takes static methods from the luaApi class as its 3rd argument. Now some programmer wants to use the lua class, so he is forced to create his own class with definitions of the static methods, like:
        class luaApi {
        public:
            static int asd();
            static int zxc();
        };
    And now the point: I don't want (as a programmer) to create a class named exactly "luaApi", but e.g. myClassForLuaApi. For now that's not possible because the name is written explicitly in the code of the lua class:
        lua_register( luaState, "asd", luaApi::asd);
    I would have to change it to:
        lua_register( luaState, "asd", myClassForLuaApi::asd);
    but I don't want to (let's assume that the programmer has no access there). If it's still not understandable, I give up. :) Thanks.
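    One way to break the hard-coded dependency (a sketch, assuming the registered functions have the usual lua_CFunction signature int(lua_State*), which is what lua_register expects) is to let callers hand the lua class the functions to register, so the class name never appears inside lua:
        #include <lua.hpp>
        #include <utility>
        #include <vector>

        class lua {
        public:
            // The caller decides which class (luaApi, myClassForLuaApi, ...) provides the functions.
            void registerFun(const std::vector<std::pair<const char*, lua_CFunction>>& funcs) {
                for (const auto& f : funcs)
                    lua_register(luaState, f.first, f.second);
            }
        private:
            lua_State* luaState;
        };

        // Usage with a differently named API class:
        class myClassForLuaApi {
        public:
            static int asd(lua_State* L) { return 0; }
            static int zxc(lua_State* L) { return 0; }
        };
        // l.registerFun({ {"asd", &myClassForLuaApi::asd}, {"zxc", &myClassForLuaApi::zxc} });
    A template parameter achieves the same decoupling (template <class Api> void registerFun() { lua_register(luaState, "asd", &Api::asd); ... }) if the set of function names is fixed.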

    Read the article

  • authentication question (security code generation logic)

    - by Stick it to THE MAN
    I have a security number generator device, small enough to go on a key-ring, which has a six-digit LCD display and a button. After I have entered my account name and password on an online form, I press the button on the security device and enter the security code number which is displayed. I get a different number every time I press the button, and the number generator has a serial number on the back which I had to input during the account set-up procedure. I would like to incorporate similar functionality in my website. As far as I understand, these are the main components:
    1. Generate a unique N-digit alphanumeric sequence during registration and assign it to the user (permanently).
    2. Allow the user to generate an N (or M?) digit alphanumeric sequence remotely. For now, I don't care about the hardware side; I am only interested in knowing how I may choose a suitable algorithm that will allow the user to generate an N (or M?) long alphanumeric sequence - presumably using his unique ID as a seed.
    3. Identify the user from the number generated in step 2 (which decryption method is the most robust way to do this?).
    I have the following questions: Have I identified all the steps required in such an authentication system? If not, please point out what I have missed and why it is important. What are the most robust encryption/decryption algorithms I can use for steps 1 through 3 (preferably using 64 bits)?
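    Hardware tokens of this kind typically implement an HMAC-based one-time password (HOTP, RFC 4226) or its time-based variant (TOTP) rather than anything the server "decrypts": the server and the device share a secret (registered against the device serial number), the code is derived from that secret plus a counter or timestamp, and the server verifies step 3 by recomputing the code for a small window of counter values. A minimal sketch of the HOTP computation in Python (the shared secret and counter handling are assumptions):
        import hmac, hashlib, struct

        def hotp(secret, counter, digits=6):
            """RFC 4226 HMAC-based one-time password (HMAC-SHA1 plus dynamic truncation)."""
            msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
            digest = hmac.new(secret, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                            # dynamic truncation offset
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % (10 ** digits)).zfill(digits)

        # Server and token both hold the per-user secret and advance the counter on each use;
        # verification is recomputation, not decryption.
        print(hotp(b"per-user-shared-secret", 1))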

    Read the article

  • Creating parallel selenium tests in C# and using Nunit as the runner application

    - by damianmartin
    I am writing a new test suite for the company to test a very complex ASP.NET application which is heavily AJAX driven. We have decided to use Selenium (Grid & Remote Control) and NUnit to run these tests. The actual tests are dynamically created at run time from a spreadsheet. Each column in an Excel spreadsheet relates to a new test and each row relates to a Selenium command (written in plain English; a DLL converts this into Selenium code). The problem I have at the moment is getting the tests to run in parallel. There will be 1000+ tests, so it is too time consuming to have one test run at a time. Selenium Grid and the Selenium Remote Control(s) are set up correctly because I can run their demo. From what I have read I need to use Punit, but I cannot find any documentation on what a test in Punit should look like. NUnit tests are [SetUp] [TearDown] [Test]. Can anyone point me in the right direction? Thanks in advance.

    Read the article

  • What is the Proper approach for Constructing a PhysicalAddress object from Byte Array

    - by Paul Farry
    I'm trying to understand the correct approach for a constructor that accepts a byte array, with regard to how it stores its data (specifically with PhysicalAddress). I have an array of 6 bytes (theAddress) that is constructed once. I have a source array of 18 bytes (theAddresses) that is loaded from a TCP connection. I then copy 6 bytes from theAddresses+offset into theAddress and construct the PhysicalAddress from it. The problem is that PhysicalAddress just stores the reference to the array that was passed in. Therefore if you subsequently check the addresses they only ever point to the last address that was copied in. When I took a look inside PhysicalAddress with Reflector it's easy to see what's going on:
        public PhysicalAddress(byte[] address)
        {
            this.changed = true;
            this.address = address;
        }
    Now I know this can be solved by creating theAddress array on each pass, but I wanted to find out what really is the best practice here:
    - Should the constructor of an object that accepts a byte array create its own private variable for holding the data and copy it from the original?
    - Should it just hold the reference to what was passed in?
    - Should I just create theAddress on each pass in the loop?
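    Since PhysicalAddress keeps the reference it is given, the usual pattern is to hand it a fresh copy per address - a defensive copy - either at the call site or in your own wrapper. A small sketch (array names follow the description above):
        using System;
        using System.Net.NetworkInformation;

        static class AddressReader
        {
            // theAddresses is the 18-byte buffer from the TCP read; offset selects one 6-byte address
            public static PhysicalAddress ReadAddress(byte[] theAddresses, int offset)
            {
                var bytes = new byte[6];
                Array.Copy(theAddresses, offset, bytes, 0, 6);  // fresh array per call, so each
                return new PhysicalAddress(bytes);              // PhysicalAddress owns distinct data
            }
        }
    For your own constructors the guideline is the mirror image: if the object's identity or immutability depends on the bytes, copy them (or document that the caller must not reuse the buffer); holding the reference is only appropriate when the type is explicitly a view over caller-owned memory.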

    Read the article

  • How to convert model into url properly in asp.net MVC?

    - by 4eburek
    From the SEO standpoint it is nice to see URLs in a format which explains what is located on the page. Let's look at such a situation (it is just an example): we need to display a page about some user and have decided on this URL template for that page: /user/{UserId}/{UserCountry}/{UserLogin}. For this purpose we create such a model:
        public class UserUrlInfo {
            public int UserId { get; set; }
            public string UserCountry { get; set; }
            public string UserLogin { get; set; }
        }
    I want to create a controller method where I pass a UserUrlInfo object, not all the required fields. The classic controller method for the URL template shown above is the following:
        public ActionResult Index(int UserId, string UserCountry, string UserLogin) {
            return View();
        }
    and we need to call it like this:
        Html.ActionLink<UserController>(x => Index(user.UserId, user.UserCountry, user.UserLogin), "See user page")
    I want to create such a controller method:
        public ActionResult Index(UserUrlInfo userInfo) {
            return View();
        }
    and call it like this:
        Html.ActionLink<UserController>(x => Index(user), "See user page")
    It actually works when we add one more route pointing to the same controller method, so the routing becomes:
        /user/{userInfo}
        /user/{UserId}/{UserCountry}/{UserLogin}
    In this situation the routing engine uses the ToString method of our model (we need to override it) and it works ALMOST always. But sometimes it fails and shows a URL like /page/?userInfo=/US/John, so my workaround does not always work properly. Does anybody know how to work with URLs in such a way?
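    One route-friendly option (a sketch, not necessarily the only way) is to keep the single descriptive route and let the default model binder fill the complex parameter from the route tokens, which it should do when the token names match the property names; the extra {userInfo} route and the ToString override then become unnecessary. Route name, Global.asax placement and the view markup are assumptions:
        // Route registration (Global.asax)
        routes.MapRoute(
            "UserPage",
            "user/{UserId}/{UserCountry}/{UserLogin}",
            new { controller = "User", action = "Index" });

        // Controller - UserId/UserCountry/UserLogin from the route are bound onto the model
        public ActionResult Index(UserUrlInfo userInfo)
        {
            return View(userInfo);
        }

        // Link generation - pass the individual route values and let the helper compose the URL
        // <%= Html.ActionLink("See user page", "Index", "User",
        //         new { user.UserId, user.UserCountry, user.UserLogin }, null) %>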

    Read the article

  • wxGraphicsContext dreadfully slow on Windows

    - by Jonatan
    I've implemented a plotter using wxGraphicsContext. The development was done using wxGTK, and the graphics were very fast. Then I switched to Windows (XP) using wxWidgets 2.9.0, and the same code is extremely slow: it takes about 350 ms to render a frame. Since the user is able to drag the plotter with the mouse to navigate, it feels very sluggish with such a slow update rate. I've tried to implement some parts using wxDC and benchmarked the difference: with wxDC the code runs about 100 times faster. As far as I know both Cairo and GDI+ are implemented in software at this point, so there's no real reason Cairo should be so much faster than GDI+. Am I doing something wrong? Or is the GDI+ implementation just not on par with Cairo? One small note: I'm rendering to a wxBitmap now, with the wxGraphicsContext created from a wxMemoryDC. This is to avoid flicker on XP, since double buffering doesn't work there.

    Read the article

  • Why open source it? And how to get real involvement?

    - by donpal
    For me the main goal of open sourcing something is collaboration. If the most that other developers are going to do is take it and use it and report bugs to me, then I might as well close source it. Closed source provides me with all that. I was recently looking at a small javascript library (or more like a plugin, 1000 lines of code) that's actually quite popular. There were some bugs in it because new browsers and browser versions get released everyday and these bugs just pop up as a result. What bothered me is that these bugs would actually be quite easy to fix by even intermediate javascript developers, but for an entire month no one stepped up to fix the bug and submit the fixed version. The original author was apparently busy for that month, but that's the point of open sourcing your code: so that others can use it and help themselves AND the project if they can. So this makes me doubt the promise of open source. If people aren't working on it too, I might as well close source my new projects. And how do you get people involved so that open sourcing is worth it?

    Read the article

  • Multi-dimensional array edge/border conditions

    - by kirbuchi
    Hi, I'm iterating over a 3-dimensional array (an image with 3 values per pixel) to apply a 3x3 filter to each pixel as follows:
        //For each value on the image
        for (i=0; i<3*width*height; i++){
          //For each filter value
          for (j=0; j<9; j++){
            if (notOutsideEdgesCondition){
              *(**(outArray)+i) += *(**(pixelArray)+i-1+(j%3)) * (*(filter+j));
            }
          }
        }
    I'm using pointer arithmetic because if I used array notation I'd have 4 loops, and I'm trying to have the least possible number of loops. My problem is that my notOutsideEdgesCondition is getting quite out of hand because I have to consider 8 border cases. I have the following conditions handled:
        Left column:  ((i%width)==0) && (j%3==0)
        Right column: ((i-1)%width==0) && (i>1) && (j%3==2)
        Upper row:    (i<width) && (j<2)
        Lower row:    (i>(width*height-width)) && (j>5)
    and I still have to consider the 4 corner cases, which will have even longer expressions. At this point I've stopped and asked myself if this is the best way to go, because a conditional evaluation 5 lines long will not only be truly painful to debug but will also slow the inner loop. That's why I come to you to ask if there's a known algorithm to handle these cases, or a better approach for my problem. Thanks a lot.
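    A standard way to get rid of the edge tests entirely is to pad the image: copy it into a buffer one pixel wider on every side (zeroed or replicated border), run the 3x3 filter only over the interior, and the condition disappears from the inner loop. A sketch for a single-channel image (for 3 channels the row stride is just 3*width); buffer names and the float element type are illustrative:
        #include <string.h>

        /* src and dst are width*height, filter is 3x3, padded is (width+2)*(height+2) scratch */
        void filter3x3_padded(const float *src, float *dst, const float *filter,
                              int width, int height, float *padded)
        {
            int pw = width + 2;

            /* 1. Build a zero-padded copy of src. */
            memset(padded, 0, sizeof(float) * pw * (height + 2));
            for (int y = 0; y < height; y++)
                memcpy(padded + (y + 1) * pw + 1, src + y * width, sizeof(float) * width);

            /* 2. Convolve with no edge conditions at all. */
            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++) {
                    float acc = 0.0f;
                    for (int j = 0; j < 9; j++)
                        acc += padded[(y + j / 3) * pw + (x + j % 3)] * filter[j];
                    dst[y * width + x] = acc;
                }
        }
    The cost is one extra copy of the image, which is usually far cheaper than a branch on every multiply-accumulate.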

    Read the article

  • How do I retrieve an automated report and save it to a database?

    - by Mason Wheeler
    I've got a web server that will take scripts in Python, PHP or Perl. I don't know much about any of those languages, but of the three, Python seems the least scary. It has a MySQL database set up, and I know enough SQL to manage it and write queries for it. I also have a program that I want to add automated error reporting to: something goes wrong, it sends a bug report to my server. What I don't know how to do is write a Python script that will sit on the web server and, when my program sends in a bug report, do the following:
    - Receive the bug report.
    - Parse it out into sections.
    - Insert it into the database.
    - Have the server send me an email.
    From what little I understand, this seems like it shouldn't be too difficult if I only knew what I was doing. Could someone point me to a site that explains the basic principles I'd need to create a script like this?
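    For orientation, here is a minimal sketch of such a receiver as an old-style CGI script using the MySQLdb driver (the form field names, table layout, addresses and credentials are all assumptions; a real deployment also needs input validation and credentials kept out of the source):
        #!/usr/bin/env python
        import cgi
        import smtplib
        from email.mime.text import MIMEText
        import MySQLdb

        form = cgi.FieldStorage()                    # 1. receive the POSTed bug report
        version = form.getfirst("version", "")       # 2. pull out the sections the client sent
        message = form.getfirst("message", "")
        stacktrace = form.getfirst("stacktrace", "")

        db = MySQLdb.connect(host="localhost", user="bugs", passwd="secret", db="bugdb")
        cur = db.cursor()                            # 3. insert into the database (parameterised)
        cur.execute("INSERT INTO reports (version, message, stacktrace) VALUES (%s, %s, %s)",
                    (version, message, stacktrace))
        db.commit()

        mail = MIMEText("New bug in %s:\n\n%s\n\n%s" % (version, message, stacktrace))
        mail["Subject"] = "Automated bug report"
        mail["From"] = "bugs@example.com"
        mail["To"] = "me@example.com"
        smtplib.SMTP("localhost").sendmail("bugs@example.com", ["me@example.com"], mail.as_string())  # 4. notify

        print("Content-Type: text/plain\n")          # CGI response back to the client
        print("OK")
    The same four steps map directly onto any of the Python web frameworks if bare CGI feels too spartan.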

    Read the article

  • Fix hard-coded display setting without source (24-bit, need 32-bit)

    - by FerretallicA
    I wrote a program about 10 years ago in Visual Basic 6 which was basically a full-screen game similar to Breakout / Arkanoid but had 'demoscene'-style backgrounds. I found the program, but not the source code. Back then I hard-coded the display mode to 800x600x24, and as a result the program crashes whenever I try to run it. No virtual machine seems to support a 24-bit display when the host display mode is 16/32-bit. It uses DirectX 7, so DOSBox is no use. I've tried all sorts of decompilers, and at best they give me the form names and a bunch of assembly calls which mean nothing to me. The display mode setting was a DirectX 7 call, but there's no clear reference to it in the decompilation. In this situation, are there any pointers on how I can:
    - pin-point the function call in the program which is setting the display mode to 800x600x24 (ResHacker maybe?) and change the value being passed to it so it sets 800x600x32
    - view/intercept DirectX calls being made while it's running
    - or, if that's not possible, at least run the program in an environment that emulates a 24-bit display
    I don't need to recover the source code (as nice as it would be) so much as just want to get it running.

    Read the article

  • Impossible to remove directory

    - by Mark
    Evidently I've never had to delete a directory using the Win32 SDK before, because it's apparently an impossible task. I've tried anything and everything - RemoveDirectory, SHFileOperation with FO_DELETE, etc. Currently I call CreateDirectory in one thread, start another thread, copy some files to this directory in the new thread, then delete all the files in the directory in the new thread, and then, back in the original thread that created the directory, try to delete the now-empty directory - and it fails. The directory really and truly is empty when I try to delete it, but it makes no difference. The whole thread aspect is irrelevant, I think, because at one point everything was in one thread and it didn't work. I'm currently setting a SECURITY_ATTRIBUTES structure on CreateDirectory to grant access to everyone, but it makes no difference. RemoveDirectory has in the past returned 32 from GetLastError, which I believe is a sharing violation. But even if I just try to delete the empty directory from the command line, it refuses, saying "The process cannot access the file because it is being used by another process.", until I shut down the entire application that created the directory. (Note: the directory is created in GetTempPath.)
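    Error 32 (ERROR_SHARING_VIOLATION) that only clears when your own application exits usually means the process itself still holds a handle into the directory: typical culprits are an unclosed FindFirstFile/FindNextFile enumeration handle, a file handle left open after the copy, or the directory being the current directory of one of your threads. A hedged sketch of the checks that normally make RemoveDirectory succeed (paths are illustrative):
        #include <windows.h>
        #include <stdio.h>

        BOOL DeleteEmptyDir(const wchar_t *dir)
        {
            // 1. Make sure no thread has the target as its current directory.
            wchar_t tempPath[MAX_PATH];
            GetTempPathW(MAX_PATH, tempPath);
            SetCurrentDirectoryW(tempPath);

            // 2. Every FindFirstFileW handle used to enumerate the directory must already be
            //    closed with FindClose, and every CreateFile/copy handle inside it closed,
            //    before this call is made.
            if (!RemoveDirectoryW(dir)) {
                wprintf(L"RemoveDirectory failed, GetLastError=%lu\n", GetLastError());
                return FALSE;
            }
            return TRUE;
        }
    A tool like Process Explorer (Find Handle) will show exactly which handle in which process still references the path, which settles whether it is your own handle or something external like an indexer or antivirus scanner.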

    Read the article

  • OpenMP - running things in parallel and some in sequence within them

    - by Sayan Ghosh
    Hi, I have a scenario like:
        for (i = 0; i < n; i++) {
          for (j = 0; j < m; j++) {
            for (k = 0; k < x; k++) {
              val = 2*i + j + 4*k;
              if (val != 0) {
                for (t = 0; t < l; t++) {
                  someFunction((i + t) + someFunction(j + t) + k*t);
                }
              }
            }
          }
        }
    Consider this to be block A; I have two more similar blocks in my code. I want to run them in parallel, so I used OpenMP pragmas. However I am not able to parallelize it, because I am a tad confused about which variables would be shared and which private in this case. If the function call in the inner loop were an operation like sum += x, then I could have added a reduction clause. In general, how would one approach parallelizing code with OpenMP when there is a nested for loop, and then another inner for loop doing the main operation? I tried declaring a parallel region and then simply putting pragma fors before the blocks, but I am definitely missing a point there! Thanks, Sayan
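    For a loop nest like this, the usual pattern is to parallelize the outermost loop only and mark everything written per-iteration as private: the inner loop counters and val must be private or the threads will overwrite each other's values. A sketch (assuming someFunction is thread-safe and n is large enough to be worth splitting):
        #pragma omp parallel for private(j, k, t, val) schedule(static)
        for (i = 0; i < n; i++) {
            for (j = 0; j < m; j++) {
                for (k = 0; k < x; k++) {
                    val = 2*i + j + 4*k;
                    if (val != 0) {
                        for (t = 0; t < l; t++) {
                            someFunction((i + t) + someFunction(j + t) + k*t);
                        }
                    }
                }
            }
        }
    i is private automatically as the parallel-for index, and n, m, x and l can stay shared because they are only read. If the three similar blocks are independent of one another, they can also go into separate #pragma omp section blocks inside one parallel region so the blocks themselves run concurrently.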

    Read the article

  • Application.ProcessMessages hangs???

    - by X-Ray
    My single-threaded Delphi 2009 app (not quite complete yet) has started to have a problem with Application.ProcessMessages hanging. The app has a TTimer object that fires every 100 ms to poll an external device. I use Application.ProcessMessages to update the screen when something changes, so the app stays responsive. One of these calls was in a grid OnMouseDown event; in there, an Application.ProcessMessages call essentially hung. Removing that was no problem, except that I soon discovered another Application.ProcessMessages that was also blocking. I think what may be happening is that the TTimer is--in the app mode I'm currently debugging--probably taking too long to complete. I have prevented the TTimer.OnTimer event handler from re-entering the same code (see below):
        procedure TfrmMeas.tmrCheckTimer(Sender: TObject);
        begin
          if m_CheckTimerBusy then exit;
          m_CheckTimerBusy := true;
          try
            PollForAndShowMeasurements;
          finally
            m_CheckTimerBusy := false;
          end;
        end;
    In what places would it be bad practice to call Application.ProcessMessages? OnPaint routines spring to mind as something that wouldn't make sense. Any general recommendations? I am surprised to see this kind of problem arise at this point in the development!

    Read the article

  • How to create platform independent 3D video on 3D TV via HDMI 1.4?

    - by artif
    I am writing a real-time, interactive 3D visualization program and at each point in the program, I can compute 2 images (bitmaps) that are meant to look 3D together by means of stereoscopy. How do I get my program to display the image pairs such that they look 3D on a 3D TV? Is there a platform independent way of accomplishing it? (By platform I mean independent of GPU brand, operating system, 3D TV vendor, etc.) If not, which is preferable-- to lock in by GPU, OS, or 3D TV? I suppose I need to be using an HDMI 1.4 cable with the 3D TV? HDMI 1.4 can encode stereoscopy via side-by-side method. But how do I send such an encoded signal to the monitor? What kind of libraries do I use for this sort of thing? Windows DirectShow? If DirectShow is correct, is there a cross platform equivalent available? If anyone asks, yes I have seen this question: http://stackoverflow.com/questions/2811350/generating-3d-tv-stereoscopic-output-programmatically. However, correct me if I am wrong, it does not appear to be what I'm looking for. I do not have an OpenGL or Direct3D program that generates polygons, for which a Nvidia card can do ad-hoc impromptu stereoscopy simply by rendering the scene from 2 slightly offset points of view and then displaying those 2 images on the monitor-- my program already has those image pairs and needs to display them (and they are not the result of rendering polygons). Btw, I have never done any major multimedia programming before and know very little about HDMI, Direct Show, 3D TVs, etc so pardon me if any parts of this question did not make any sense at all.

    Read the article

  • How to calculate this string-dissimilarity function efficiently?

    - by ybungalobill
    Hello, I was looking for a string metric with the property that moving large blocks around in a string doesn't affect the distance much, so that "helloworld" is close to "worldhello". Obviously Levenshtein distance and longest common subsequence don't fulfill this requirement. Using Jaccard distance on the set of n-grams gives good results but has other drawbacks (it's a pseudometric, and a higher n results in a higher penalty for changing a single character). [original research] As I thought about it, what I'm looking for is a function f(A,B) such that f(A,B)+1 equals the minimum number of blocks that one has to divide A into (A1 ... An) so that applying a permutation to the blocks yields B:
        f("hello", "hello") = 0
        f("helloworld", "worldhello") = 1  // hello world -> world hello
        f("abba", "baba") = 2              // ab b a -> b ab a
        f("computer", "copmuter") = 3      // co m p uter -> co p m uter
    This can be extended to A and B that aren't necessarily permutations of each other: any additional character that can't be matched is counted as one additional block.
        f("computer", "combuter") = 3      // com uter -> com uter, unmatched: p and b.
    Observing that instead of counting blocks we can count the number of pairs of adjacent indices that are pulled apart by the permutation, we can write f(A,B) formally as:
        f(A,B) = min { C(P) | P : {1..|A|} ⇀ {1..|B|}, P is bijective, ∀ i ∈ dom(P): A[i] = B[P(i)] }
        C(P) = |A| + |B| - |dom(P)| - |{ i | i, i+1 ∈ dom(P) and P(i)+1 = P(i+1) }| - 1
    The problem is... guess what... that I'm not able to calculate this in polynomial time. Can someone suggest a way to do this efficiently? Or perhaps point me to an already known metric that exhibits similar properties?
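    For comparison, the Jaccard-on-n-grams baseline mentioned above fits in a few lines; the sketch below uses bigrams and the 1 - |intersection| / |union| form of the distance:
        def ngrams(s, n=2):
            return {s[i:i + n] for i in range(len(s) - n + 1)}

        def jaccard_distance(a, b, n=2):
            ga, gb = ngrams(a, n), ngrams(b, n)
            if not ga and not gb:
                return 0.0
            return 1.0 - len(ga & gb) / float(len(ga | gb))

        # "helloworld" vs "worldhello" differ only in the two bigrams that straddle the cut point
        print(jaccard_distance("helloworld", "worldhello"))
    Its drawbacks are exactly the ones noted above: distinct strings can sit at distance 0 (it is a pseudometric), and a single changed character perturbs up to n different n-grams, so larger n punishes point edits more heavily.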

    Read the article

  • Should we deploy a Webkit browser for our intranet applications?

    - by Jeff Meatball Yang
    At my place of employment, we are increasingly finding it difficult to develop for IE, which was historically the easiest browser to target, from an intranet-app point of view. It was already deployed. It already understood NTLM authentication, thus well integrated with our domain-level security. It had neat, albeit non-standard features such as XMLDOM and XmlHTTP. Now, we are increasingly irritated by issues presented by IE: There are several versions: IE 7, 8, and soon 9 beta, which all have slightly different issues related to performance, functionality (especially re:security and zones), and aesthetics. IE 7 and 8 are slower than Webkit-based browsers. Period. There are technology limitations such as missing canvas element, CSS bugs, etc. that make it hard to use 3rd party packages or even consistently write code across IE versions. Users are increasingly using Firefox or Chrome, even for intranet use. Does anyone have experience with making a transition? Any advice would be welcome.

    Read the article

  • ~1 second TcpListener Pending()/AcceptTcpClient() lag

    - by cpf
    Probably best to just watch this video: http://screencast.com/t/OWE1OWVkO As you can see, there is a delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it. Here's the code that waits for the connection:
        public IDLServer(System.Net.IPAddress addr, int port)
        {
            Listener = new TcpListener(addr, port);
            Listener.Server.NoDelay = true; // I added this just for testing, it has no impact
            Listener.Start();
            ConnectionThread = new Thread(ConnectionListener);
            ConnectionThread.Start();
        }
        private void ConnectionListener()
        {
            while (Running)
            {
                while (Listener.Pending() == false)
                {
                    System.Threading.Thread.Sleep(1);
                } // this is the part with the lag
                Console.WriteLine("Client available"); // from this point on everything runs perfectly fast
                TcpClient cl = Listener.AcceptTcpClient();
                Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
                proct.Start(cl);
            }
        }
    (I was having some trouble getting the code into a code block.) I've tried a couple of different things - could it be that I'm using TcpClient/Listener instead of a raw Socket object? It's not mandatory TCP overhead, I know, and I've tried running everything in the same thread, etc.
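    One thing worth ruling out: polling Pending() with Thread.Sleep(1) adds at most timer-resolution latency (around 15 ms on Windows), so it probably isn't the whole second, but letting AcceptTcpClient block removes that variable entirely and simplifies the loop. A sketch of the same loop without the polling (field names follow the code above):
        private void ConnectionListener()
        {
            while (Running)
            {
                // AcceptTcpClient blocks until a connection is ready; no Pending()/Sleep polling needed.
                TcpClient cl = Listener.AcceptTcpClient();
                Console.WriteLine("Client available");
                var proct = new Thread(InstanceHandler);
                proct.Start(cl);
            }
        }
    If the delay persists with a blocking accept, the listener itself is doing nothing but waiting in the kernel, so the cause is more likely outside this loop (for example name resolution or firewall inspection around the connection).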

    Read the article

  • Impossible to be const-correct when combining data and its lock?

    - by Graeme
    I've been looking at ways to combine a piece of data which will be accessed by multiple threads with the lock provisioned for thread-safety. I think I've got to a point where I don't think it's possible to do this whilst maintaining const-correctness. Take the following class for example:
        template <typename TType, typename TMutex>
        class basic_lockable_type
        {
        public:
            typedef TMutex lock_type;
        public:
            template <typename... TArgs>
            explicit basic_lockable_type(TArgs&&... args) : TType(std::forward<TArgs...>(args)...) {}
            TType& data() { return data_; }
            const TType& data() const { return data_; }
            void lock() { mutex_.lock(); }
            void unlock() { mutex_.unlock(); }
        private:
            TType data_;
            mutable TMutex mutex_;
        };
        typedef basic_lockable_type<std::vector<int>, std::mutex> vector_with_lock;
    In this I try to combine the data and the lock, marking mutex_ as mutable. Unfortunately this isn't enough as I see it, because when used, vector_with_lock would have to be marked as mutable in order for a read operation to be performed from a const function, which isn't entirely correct (data_ should be mutable from a const).
        void print_values() const
        {
            std::lock_guard<vector_with_lock>(values_);
            for(const int val : values_)
            {
                std::cout << val << std::endl;
            }
        }
        vector_with_lock values_;
    Can anyone see any way around this such that const-correctness is maintained whilst combining data and lock? Also, have I made any incorrect assumptions here?
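    One way to keep the wrapper usable from const members (a sketch, not the only design) is to make lock()/unlock() const, which the mutable mutex_ already permits, and then lock through a const reference; note also that the unnamed std::lock_guard temporary above is destroyed at the end of that statement, so the guard needs a name for the lock to cover the loop. A compact self-contained version:
        #include <iostream>
        #include <mutex>
        #include <vector>

        template <typename TType, typename TMutex>
        class basic_lockable_type
        {
        public:
            TType&       data()       { return data_; }
            const TType& data() const { return data_; }
            void lock()   const { mutex_.lock(); }    // const: mutex_ is mutable, so this compiles
            void unlock() const { mutex_.unlock(); }
        private:
            TType data_;
            mutable TMutex mutex_;
        };

        typedef basic_lockable_type<std::vector<int>, std::mutex> vector_with_lock;

        struct Holder
        {
            vector_with_lock values_;
            void print_values() const
            {
                std::lock_guard<const vector_with_lock> guard(values_);  // named guard, const lockable type
                for (int val : values_.data())
                    std::cout << val << '\n';
            }
        };
    The alternative is to hide lock management entirely behind an accessor (for example a with_lock(callable) method) so callers cannot forget the guard in the first place.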

    Read the article

  • Have I found a security problem in an API or do I just not understand SSL?

    - by jamieb
    I'm working on building a set of Python bindings around an XML-based API provided by a vendor. The vendor requires that all transactions be conducted over SSL. Using a Linux box, I created a key file and a CSR for my application. Using their self-service web portal, I then generate a certificate using that CSR. Both the key file and the certificate are used when making the SSL request to the API. I'm now working on designing exception classes to make error messages more verbose (and, hopefully, more useful to developers using my bindings). Part of my testing has included altering the key file: transpose a couple characters here, replace 4 or 5 with random characters there, etc. To my surprise, altering the key file had no effect! As long as I didn't change the total length of it, the API didn't complain about a bad key file. The only way I was able to throw an error was by swapping in a completely different key from another application. At that point, the API complained about the Common Name not matching. Is this normal behavior or has the vendor not properly implemented SSL?

    Read the article

  • Apache and multiple tomcats proxy

    - by Sebb77
    I have 1 Apache server and two Tomcat servers with two different applications. I want to use Apache as a proxy so that the user can access both applications from the same URL using different paths, e.g.:
        localhost/app1 --> localhost:8080/app1
        localhost/app2 --> localhost:8181/app2
    I tried all 3 Apache proxy modules (mod_jk, mod_proxy_http and mod_proxy_ajp); the first application works, whilst the second is not accessible. This is the Apache configuration I'm using:
        ProxyPassMatch ^(/.*\.gif)$ !
        ProxyPassMatch ^(/.*\.css)$ !
        ProxyPassMatch ^(/.*\.png)$ !
        ProxyPassMatch ^(/.*\.js)$ !
        ProxyPassMatch ^(/.*\.jpeg)$ !
        ProxyPassMatch ^(/.*\.jpg)$ !
        ProxyRequests Off
        ProxyPass /app1 ajp://localhost:8009/
        ProxyPassReverse /app1 ajp://localhost:8009/
        ProxyPass /app2 ajp://localhost:8909/
        ProxyPassReverse /app2 ajp://localhost:8909/
    With the above, I manage to view the Tomcat root application using localhost/app1, but I get "Service Temporarily Unavailable" (an Apache error) when accessing app2. I need to keep the Tomcat servers separate because I need to restart one of the applications often, and it is not an option to put both apps on the same Tomcat. Can someone point out what I'm doing wrong? Thank you all.
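    Two things worth checking, sketched below (the ports are taken from the snippet above and must match each Tomcat's AJP connector in server.xml). Mapping /app1 to ajp://localhost:8009/ points at that Tomcat's ROOT context, which matches the symptom of seeing the root application, so the context path belongs on the right-hand side as well; and a 503 "Service Temporarily Unavailable" for app2 usually means nothing is actually listening on the second AJP port:
        # Apache side: keep the context path in the worker URL
        ProxyRequests Off
        ProxyPass        /app1 ajp://localhost:8009/app1
        ProxyPassReverse /app1 ajp://localhost:8009/app1
        ProxyPass        /app2 ajp://localhost:8909/app2
        ProxyPassReverse /app2 ajp://localhost:8909/app2

        # Second Tomcat (conf/server.xml): the AJP connector must listen on 8909
        # <Connector port="8909" protocol="AJP/1.3" redirectPort="8443" />
    If the second Tomcat still has only the default 8009 connector, or AJP disabled, Apache's 503 is exactly what you would see.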

    Read the article

  • Vertex Buffers in opengl

    - by JB
    I'm making a small 3D graphics game/demo for personal learning. I know D3D9 and quite a bit about D3D11, but little about OpenGL at the moment, so I'm intending to abstract out the actual rendering so that my scene graph and everything "above" it needs to know little about how to actually draw the graphics. I intend to make it work with D3D9, then add D3D11 support and finally OpenGL support - just as a learning exercise about 3D graphics and abstraction. I don't know much about OpenGL at this point, though, and don't want my abstract interface to expose anything that isn't simple to implement in OpenGL. Specifically I'm looking at vertex buffers. In D3D they are essentially an array of structures, but looking at the OpenGL interface the equivalent seems to be vertex arrays. However these seem to be organised rather differently: you need a separate array for vertices, one for normals, one for texture coordinates, etc., and you set them with glVertexPointer, glTexCoordPointer and so on. I was hoping to be able to implement a VertexBuffer interface much like the DirectX one, but it looks like in D3D you have an array of structures and in OpenGL you need a separate array for each element, which makes finding a common abstraction quite hard to make efficient. Is there any way to use OpenGL in a similar way to DirectX? Or any suggestions on how to come up with a higher-level abstraction that will work efficiently with both systems?
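    OpenGL does support the D3D-style "array of structs" layout: each gl*Pointer call takes a stride and a starting pointer (or, with a VBO bound, a byte offset into the buffer), so all attributes can live interleaved in one array. A sketch with the classic fixed-function pointers; the vertex layout is an assumption, and offsetof keeps the code honest about it:
        #include <GL/gl.h>
        #include <stddef.h>   /* offsetof */

        typedef struct {
            float position[3];
            float normal[3];
            float texcoord[2];
        } Vertex;                      /* same idea as a D3D FVF / vertex declaration */

        static void bind_interleaved(const Vertex *verts)
        {
            GLsizei stride = sizeof(Vertex);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_NORMAL_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            glVertexPointer(3, GL_FLOAT, stride, (const char *)verts + offsetof(Vertex, position));
            glNormalPointer(GL_FLOAT, stride, (const char *)verts + offsetof(Vertex, normal));
            glTexCoordPointer(2, GL_FLOAT, stride, (const char *)verts + offsetof(Vertex, texcoord));
        }
    With a vertex buffer object (glGenBuffers / glBindBuffer(GL_ARRAY_BUFFER, ...) / glBufferData) the same calls take offsets instead of client pointers, which lines up closely with SetStreamSource plus a vertex declaration in D3D9, so a common VertexBuffer abstraction is quite workable.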

    Read the article
