Search Results

Search found 7019 results on 281 pages for 'adaptive systems'.


  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain-text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to base64-encode the array and send it as binary. This is indeed faster. My questions now are: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64

        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn it into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
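
    For what it's worth, a minimal sketch of the full round trip in Python, assuming the client also has numpy available and that both sides agree on an explicit little-endian dtype ('<f8' below) rather than relying on native byte order:

        import base64
        import numpy as np

        # Sender: force an explicit little-endian float64 layout so the raw
        # bytes mean the same thing on any architecture, then base64-encode.
        x = np.arange(100, dtype=np.float64)
        payload = base64.b64encode(x.astype('<f8').tobytes())

        # Receiver: decode the base64 text, then reinterpret the bytes using
        # the agreed dtype; reshape() would restore any original dimensions.
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype='<f8')

        assert np.array_equal(x, y)

    A non-Python client needs the same two agreed facts, byte order and element width; a C client on a little-endian machine could simply memcpy the decoded bytes into an array of double.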


  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets", Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical. But I have started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them is able to simulate the infinite tape. Interestingly, programming language specifications most often leave open whether the language is Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So what you can say is that all computer systems are just as powerful as finite state machines and no more. And that brings me to the question: how useful is the term "Turing complete" at all? And back to neural nets: for any practical implementation of a neural net (including our own brain), it will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness it is not Turing complete. So does the question whether neural nets are Turing complete make sense at all? The question whether they are as powerful as finite state machines was answered much earlier (in 1954 by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already a proof that they are as powerful as any computer.


  • Looking for Bugtracker with specific features

    - by Thorsten Dittmar
    Hi, we're looking into bug-tracking systems at our firm. We're quite small (only 4 developers), but we have quite a large number of customers we develop individual software for. Most software is built explicitly for one customer, apart from two or three standard tools we ship. To make support easier for us (and to avoid being interrupted by phone calls all the time) we're looking for a bugtracker that supports a specific set of features. We want the customers to report bugs/feature/change requests themselves and be notified about these reports by email. Then we'd like to track what we've done and how much time it took, notifying the customer about that per email (private notes visible just to us must be possible). At the end of the month we'd like to bill all closed reports according to the time it took to solve/implement them. The following must be possible:

    - A web-based interface where the users log in with credentials we provide.
    - The users must not be able to create accounts themselves; we must be able to turn off any such feature.
    - We must be able to configure projects and assign customer logins to these projects.
    - Customers must only see projects they are assigned to, not any other projects; likewise, customers must not "see" other customers. We would name the projects so that standard tools are listed as separate projects for each customer.
    - A monthly report must be available that we can use to get information about the requests we worked on per customer.

    I'd like to introduce a standard product like Mantis (I've played with that a little, but didn't quite figure out whether it provides all the features listed above). The product should be open source and work in a XAMPP / Windows Server 2003 environment. Does anybody have any good suggestions?


  • Someone selling my GPL theme

    - by PaKK
    I'm having a tough time trying to figure out the best way to handle this. I created a few themes last year, released them under a GPL license, and pretty much forgot about them. My goal was to put them out publicly as samples of my work. I recently came across a site and was shocked to find they are selling one of my themes, among several other themes (not mine) and other support and package systems. Needless to say, I'm not happy about this. I did not intend for those themes to be sold, and if they are to be sold, at the least I would expect a percentage of those sales. I contacted the website asking how many they had sold, and stating that I'm the author of one of the themes they were selling. I eventually received a reply that my theme is a GPL theme and that this license allows them to sell it without compensation to me. WTF? Just the way the reply was worded pissed me off. There is no way to comment on the site to inform possible buyers that those themes can be downloaded from my site. What can I do about this? I realize now it was a bad choice to release them under that license. Is it possible to take back the theme from public distribution, or is it out there forever? Can I change it from GPL to another license at this point? Will that be sufficient to stop the sale of my theme in the future? Any insights are appreciated.


  • What is the relation between programming and mathematics?

    - by Math Grad
    Programmers seem to think that their work is quite mathematical. I understand this when you try to optimize something for performance, or find the most efficient algorithm, etc. But it patently seems false when you look at a billing application for a shop, or at systems software riddled with I/O calls. So what is it exactly? Is computation, and the programming associated with it, really mathematical? Here I have in mind particularly the words of the philosopher Schopenhauer:

        That arithmetic is the basest of all mental activities is proved by the fact that it is the only one that can be accomplished by means of a machine. Take, for instance, the reckoning machines that are so commonly used in England at the present time, and solely for the sake of convenience. But all analysis finitorum et infinitorum is fundamentally based on calculation. Therefore we may gauge the “profound sense of the mathematician,” of whom Lichtenberg has made fun, in that he says: “These so-called professors of mathematics have taken advantage of the ingenuousness of other people, have attained the credit of possessing profound sense, which strongly resembles the theologians’ profound sense of their own holiness.”

    I lifted the above quote from here. It seems that programmers are doing precisely the sort of mechanized, base mental activity the grand old man is contemptuous of. So what exactly is the deal? Is programming really the "good" kind of mathematics, or just the baser type, or altogether something else, meant for business and not to be confused with a pure discipline?


  • How to provide js-ctypes in a SpiderMonkey embedding?

    - by Triston J. Taylor
    Summary: I have looked over the code the SpiderMonkey 'shell' application uses to create the ctypes JavaScript object, but I'm a less-than-novice C programmer. Due to the varying levels of insanity emitted by modern build systems, I can't seem to track down the code or command that actually links a program with the desired functionality.

    method.madness: This js-ctypes implementation by the Mozilla devs is an awesome addition. Since its conception, scripting has been primarily used to exert control over more rigorous and robust applications. The advent of js-ctypes in the SpiderMonkey project enables JavaScript to stand up and be counted as a full-fledged object-oriented rapid application development language, flying high above 'the bar' set by various venerable application development languages such as Microsoft's VB6. Shall we begin? I built SpiderMonkey with this config:

        ./configure --enable-ctypes --with-system-nspr

    followed by successful execution of:

        make && make install

    The js shell works fine, and a global ctypes JavaScript object was verified operational in that shell. Working with code taken from the first source listing at How to embed the JavaScript Engine (MDN), I made an attempt to instantiate the JavaScript ctypes object by inserting the following code at line 66:

        /* Populate the global object with the ctypes object. */
        if (!JS_InitCTypesClass(cx, global))
            return NULL;

    I compiled with:

        g++ $(./js-config --cflags --libs) hello.cpp -o hello

    It compiles with a few warnings:

        hello.cpp: In function ‘int main(int, const char**)’:
        hello.cpp:69:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null]
        hello.cpp:80:20: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings]
        hello.cpp:89:17: warning: NULL used in arithmetic [-Wpointer-arith]

    But when you run the application:

        ./hello: symbol lookup error: ./hello: undefined symbol: JS_InitCTypesClass

    Moreover, JS_InitCTypesClass is declared extern in 'dist/include/jsapi.h', but the function resides in 'ctypes/CTypes.cpp', which includes its own header 'CTypes.h' and is compiled at some point during 'make' to yield './CTypes.o'. As I stated earlier, I am less than a novice with C code and I really have no idea what to do here. Please give, or point me toward, a generic example of making the js-ctypes object functional in an embedding.


  • Distributed Message Ordering

    - by sbanwart
    I have an architectural question on handling message ordering. For the purposes of this question the transport is irrelevant, so I'm not going to specify one. Say we have three systems: a website, a CRM and an ERP. For this example, the ERP is the "master" system in terms of data ownership. The website and the CRM can both send a new-customer message to the ERP system. The ERP system then adds the customer and publishes the customer record with the newly assigned account number, so that the website and the CRM can add the account number to their local customer records. This is a pretty straightforward process. Next we move on to placing orders. The account number is required for the CRM or the website to place an order with the ERP system. However, the CRM will permit the user to place an order even if the customer lacks an account number. (For this example, assume we can't modify the CRM's behavior.) This creates the possibility that a user could create a new customer and place an order before the account number gets updated in the CRM. What is the best way to handle this scenario? Would it be best to send the order message sans account number and let it go to an error queue? Would it be better to have the CRM endpoint hold the message and wait until the account number is updated in the CRM? Maybe something completely different that I haven't thought of? Thanks in advance for any help.
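
    To make the "hold the message at the endpoint" option concrete, here is a rough sketch of a deferral queue keyed by the local customer id. It illustrates only the idea, not any particular messaging framework, and send_to_erp and account_numbers are hypothetical names:

        from collections import defaultdict

        # local customer id -> orders waiting for an account number
        deferred = defaultdict(list)

        def on_order(order, account_numbers):
            acct = account_numbers.get(order["customer_id"])
            if acct is None:
                # No account number yet: park the order instead of failing it.
                deferred[order["customer_id"]].append(order)
            else:
                send_to_erp(dict(order, account=acct))

        def on_account_assigned(customer_id, acct, account_numbers):
            account_numbers[customer_id] = acct
            # Replay anything that was waiting on this customer, in arrival order.
            for order in deferred.pop(customer_id, []):
                send_to_erp(dict(order, account=acct))

    The error-queue option is simpler to build but pushes the retry burden onto a human; the deferral option keeps the flow automatic, at the cost of needing a timeout for customers whose account number never arrives.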


  • How to make gcc on SUN calculate floating point the same way as on Linux

    - by Marina
    I have a project where I have to perform some mathematical calculations with double variables. The problem is that I get different results on SUN Solaris 9 and on Linux. There are a lot of ways (explained here and in other forums) to make Linux work like the SUN, but not the other way around. I cannot touch the Linux code, so it is only the SUN side I can change. Is there any way to make the SUN behave like Linux? The code I run (compiled with gcc on both systems):

        int hash_func(char *long_id)
        {
            double product, lnum = 0.0, gold;

            while (*long_id)
                lnum = lnum * 10.0 + (*long_id++ - '0');
            printf("lnum => %20.20f\n", lnum);

            lnum = lnum * 10.0E-8;
            printf("lnum => %20.20f\n", lnum);

            gold = 0.6125423371582974;
            product = lnum * gold;
            printf("product => %20.20f\n", product);
            ...
        }

    If the input is 339886769243483, the output on Linux is:

        lnum    => 339886769243483.00000000000000000000
        lnum    => 33988676.92434829473495483398
        product => 20819503.60015859827399253845

    whereas on the SUN it is:

        lnum    => 339886769243483.00000000000000000000
        lnum    => 33988676.92434830218553543091
        product => 20819503.60015860199928283691

    so the two machines diverge in the trailing digits of lnum and product. Note: the result is not always different; most of the time it is the same. Just 10 of the 60000 15-digit numbers I tested have this problem. Please help!!!


  • Python: Beginner problems

    - by Blogger
    OK, so basically I'm very new to programming and have no idea how to go about these problems; help if you will ^^

    1. Numerologists claim to be able to determine a person's character traits based on the "numeric value" of a name. The value of a name is determined by summing up the values of the letters of the name, where 'a' is 1, 'b' is 2, 'c' is 3, etc., up to 'z' being 26. For example, the name "Zelle" would have the value 26 + 5 + 12 + 12 + 5 = 60 (which happens to be a very suspicious number, by the way). Write a program that calculates the numeric value of a single name provided as input.

    2. Word count. A common utility on Unix/Linux systems is a small program called "wc". This program counts the number of lines, words (strings of characters separated by blanks, tabs, or newlines), and characters in a file. Write your own version of this program. It should accept a file name as input and then print three numbers showing the count of lines, words, and characters in the file.
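
    A minimal sketch of both exercises in Python, assuming the name arrives as plain letters and the file path is readable; error handling is left out on purpose:

        # Problem 1: the numeric value of a name ('a' is 1 ... 'z' is 26).
        name = input("Enter a name: ").lower()   # raw_input() on Python 2
        value = sum(ord(ch) - ord('a') + 1 for ch in name if ch.isalpha())
        print("%s has the value %d" % (name, value))

        # Problem 2: a tiny 'wc' -- lines, words and characters in a file.
        fname = input("Enter a filename: ")
        with open(fname) as f:
            text = f.read()
        print("%d %d %d" % (len(text.splitlines()), len(text.split()), len(text)))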


  • Any info about the book "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010)?

    - by claws
    This summer I'm getting into UNIX (mostly *BSD) development. I have graduate-level knowledge of operating systems. I can also understand code and read from here and there, but the thing is, I want to make the most of my time, and reading books is best for that. From my search I found that these two books are regarded as the established books on UNIX OS internals:

    - The Design and Implementation of the 4.4 BSD Operating System
    - Unix Internals: The New Frontiers by Uresh Vahalia

    The thing is, these books are pretty much outdated. yay!! Lucky me: "Unix Internals: The New Frontiers" by Uresh Vahalia, 2nd edition (Jan 2010), has been released. I've been searching for information on this book. Sadly, Amazon says "Out of Print--Limited Availability", and I couldn't find any info regarding this book elsewhere. This is the information I'm looking for:

    - Table of contents
    - What's new in this edition?
    - Where the hell can I buy a soft copy of this book? I really cannot afford a hardcopy.
    - How can I contact the author?

    I have a lot of hopes and expectations for this book, and I've been waiting for its release for a long time. I've sent mails requesting a proper website for the book, and I even contacted the publisher for further information, but got no replies from anyone. If you have any other books that you think will help me, please suggest them. I repeat: I want to get the maximum possible out of these 2.5 months of summer.


  • How can I visualise a "broken" hierarchical dataset?

    I have a reasonably large datatable structured something like this:

        StaffNo  Grade  Direct  Boss2  Boss3  Boss4  Boss5  Boss6
        -------  -----  ------  -----  -----  -----  -----  -----
        10001    1      10002   10002  10057  10094  10043  10099
        10002    2      10057   NULL   10057  10094  10043  10099
        10003    1      10004   10004  10057  10094  10043  10099
        10004    2      10057   NULL   10057  10094  10043  10099
        10057    3      10094   NULL   NULL   10094  10043  10099
        ...

    i.e. a unique id, the person's level (grade) in the hierarchy, the ID of their boss, and the IDs of the supervisors above that (Boss2, Boss3, etc. being the boss at that particular grade). The system relies on a strict hierarchy: if you are my boss (/parent), then your boss must be my grandparent. Unfortunately this rule is not enforced within the data model, and the data ultimately comes from other systems which don't even know about the rule, let alone observe it. So you and I may share the same boss, but the boss recorded above that boss won't be the same on our two rows. Note: I cannot change the data model, and I cannot fix the data at source, so (for the moment) I have to fix the data once it's in place. Once a fortnight someone will do something which breaks the model and I'll need to modify the procs slightly to resolve it. Not ideal, but I'm stuck with this for the next six months. Anyway, specific queries are easy to produce, but I find it hard to keep track of the bigger picture. The application which sits on this runs without complaint regardless, but navigating around the system becomes extraordinarily confusing. So my question is: can anyone recommend a tool (or technique) for generating some kind of "broken tree" diagram in this sort of circumstance? I don't want something that will fix things for me, or attempt statistical analysis, but at least something that will give a visual indication of how broken the hierarchy is at any one time. Note: at the moment this is in a SQL Server database, but I'm open to ideas utilising C#, Perl or Python.
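
    As a starting point, here is a rough Python sketch (using the networkx and matplotlib packages) that draws the reporting graph and colours the edges violating the "my boss's boss must be my grandparent" rule; the extract format and the sample rows are hypothetical:

        import networkx as nx
        import matplotlib.pyplot as plt

        # staff_no -> (direct boss, grandparent as recorded on *this* row)
        rows = {
            10001: (10002, 10057),
            10002: (10057, 10094),
            10004: (10057, 10094),
            10005: (10004, 10094),   # broken on purpose: 10004 reports to 10057
        }
        boss_of = {staff: direct for staff, (direct, _) in rows.items()}

        g = nx.DiGraph()
        broken = set()
        for staff, (direct, recorded_gp) in rows.items():
            g.add_edge(direct, staff)
            actual_gp = boss_of.get(direct)   # the boss's own Direct entry
            if actual_gp is not None and actual_gp != recorded_gp:
                broken.add((direct, staff))

        colours = ["red" if edge in broken else "black" for edge in g.edges()]
        nx.draw(g, with_labels=True, edge_color=colours)
        plt.show()

    For the real data, rows would be filled from a query over StaffNo, Direct and the relevant BossN column; once the tree grows, Graphviz's dot layout tends to read better than the default spring layout.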


  • One config file for multiple environments

    - by ho
    I'm currently working with systems that have quite a lot of configuration settings that are environment-specific (Dev, UAT, Production). Does anyone have any suggestions for minimizing the changes needed to the config file when moving between environments, as well as minimizing the duplication of data in the config file? It's mostly Application settings rather than User settings. The way I'm doing it at the moment is something like this:

        <DevConnectionString>xyz</DevConnectionString>
        <DevInboundPath>xyz</DevInboundPath>
        <DevProcessedPath>xyz</DevProcessedPath>
        <UatConnectionString>xyz</UatConnectionString>
        <UatInboundPath>xyz</UatInboundPath>
        <UatProcessedPath>xyz</UatProcessedPath>
        ...
        <Environment>Dev</Environment>

    I then have a class that reads in the Environment setting via the My.Settings class (it's a VB project) and uses that to decide which of the other settings to retrieve. This leads to too much duplication, though, so I'm not sure it's worth it.
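
    The project above is VB/.NET, but just to illustrate the shape that removes the duplication (one section per environment plus a single switch, rather than an environment prefix on every key), here is a minimal sketch using Python's configparser, with made-up keys:

        import configparser
        import textwrap

        parser = configparser.ConfigParser()
        parser.read_string(textwrap.dedent("""\
            [common]
            environment = Dev

            [Dev]
            connection_string = server=dev;db=app
            inbound_path = /data/dev/inbound

            [UAT]
            connection_string = server=uat;db=app
            inbound_path = /data/uat/inbound
            """))

        env = parser["common"]["environment"]   # the only value that changes per deployment
        settings = parser[env]                  # one lookup selects the whole block
        print(settings["connection_string"])

    The same layout carries over to .NET config sections: each value is defined exactly once per environment, and only the switch differs between deployments.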


  • How do tools like HipHop for PHP deal with heterogeneous arrays?

    - by Derek Thurn
    I think HipHop for PHP is an interesting tool. It essentially converts PHP code into C++ code. Cross-compiling in this manner seems like a great idea, but I have to wonder: how do they overcome the fundamental differences between the two type systems? One specific example of my general question is heterogeneous data structures. Statically typed languages don't tend to let you put arbitrary types into an array or other container, because they need to be able to figure out the types on the other end. If I have a PHP array like this:

        $mixedBag = array("cat", 42, 8.5, false);

    how can it be represented in C++ code? One option would be to use void pointers (or the superior version, boost::any), but then you need to cast when you take stuff back out of the array... and I'm not at all convinced that the type inferencer can always figure out what to cast to at the other end. A better option, perhaps, would be something more like a union (or boost::variant), but then you need to enumerate all possible types at compile time... maybe possible, but certainly messy, since arrays can contain arbitrarily complex entities. Does anyone know how HipHop and similar tools, which go from a dynamic typing discipline to a static one, handle these kinds of problems?


  • Is Git ready to be recommended to my boss?

    - by Mike Weller
    I want to recommend Git to my boss as a new source control system, since we're stuck in the 90s with VSS (ouch), but are the tools and 3rd-party support good enough yet? Specifically I'm talking about GUI front-ends similar to TortoiseSVN, decent visual diff/merge support, as well as things like email commit notifications and general support from 3rd parties like IDEs and build systems. Even though this will be used by programmers, we really need this kind of tooling in our team. I don't want to leave everyone stuck with a new tool, and even a new source control paradigm (distributed), with nothing but a command-line app and some online tutorials. That would be a step backwards. So what do you think... is Git ready? What decent tools exist for Git, and what third-party development apps support it? EDIT: My original question was pretty vague, so I'm updating it to specifically ask for a list of available tools and 3rd-party support for Git. Maybe we can get a community wiki post with a list. Also, I do not consider 'use Subversion' to be an adequate answer: there are reasons to use a distributed source control system other than offline editing, private and cheap branches being one of them.


  • SVN supports historical merges so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha about Mercurial and decentralised version control systems in general. The main touted feature that I am aware of is that merging in Mercurial is much easier, because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book in the section on merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to, our repository version wasn't recent enough), but is this SVN feature materially different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this: branch from the development trunk to do your own thing; regularly merge changes from trunk into your branch to stay up to date; merge back when you're done, with the mergeinfo smoothing the process. Without historical merge data this is a nightmare, because the comparison looks strictly at the differences in the files and does not take into account the steps taken along the way, so each change in the development trunk puts you further into possible conflict when you merge back. Now, what I would like to know is: does merging in Mercurial provide a significant advantage compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?


  • Problem accessing a Windows shared folder on Ubuntu using the terminal

    - by vikramtheone
    Hi guys, a description of my setup: I have 2 systems, one running Windows (the host) and one running Ubuntu, both on a LAN. On the Windows host I develop software intended for the Linux system, and because the Linux system has little external storage, my idea is to make the project folder on the host a shared folder with full access and access it on Ubuntu over the network. To achieve this I have installed Samba on Ubuntu; when I go to Places -> Network I can see the shared project folder and I simply mount it. A link appears on the desktop, and using Nautilus I can open the link and access the contents of the shared folder. The problem: even though I mount the shared project folder, it doesn't appear in /media or /mnt, and as a result I don't know what path to use to access the folder from the terminal. For example, when I mounted my USB stick, as expected a link for the device appeared on the desktop and I also saw a folder under /media; similarly, I expected a mounted shared folder to appear under /mnt. Can anyone suggest what I should do now? There are many posts around, but no solid solution for this problem. Help!!! :) Vikram


  • File Storage for Web Applications: Filesystem vs DB vs NoSQL engines

    - by El Yobo
    I have a web application that stores a lot of user-generated files. Currently these are all stored on the server filesystem, which has several downsides for me:

    - When we move "folders" (as defined by our application) we also have to move the files on disk (although this is more due to strange design decisions on the part of the original developers than a requirement of storing things on the filesystem).
    - It's hard to write tests for filesystem actions; I have a mock filesystem class that logs actions like move and delete without performing them, which more or less does the job, but I don't have 100% confidence in the tests.
    - I will be adding other jobs which need to access the files from other services to perform additional tasks (e.g. indexing in Solr, generating thumbnails, movie format conversion), so I need to get at the files remotely, and doing this over network shares seems dodgy...
    - Dealing with permissions on the filesystem has sometimes given us problems in the past, although now that we've moved to a pure Linux environment this should be less of an issue.

    What are the downsides of storing the files as BLOBs in MySQL? I guess that it would massively increase the database size and reduce the effectiveness of caches, but are there other problems? Do the same problems exist with NoSQL systems like Cassandra? Does anyone have any other suggestions that might be appropriate?


  • JBoss deployment throws 'java.util.zip.ZipException: error in opening zip file' on Linux?

    - by Kaushalya
    I thought I'd post both the question and the answer for others' benefit. I deployed a large EAR (containing more than ~1024 jars/wars) on JBoss running with Java 6 on Linux, and the deployment process died throwing the following exception:

        java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.deployment.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:53)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:901)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:895)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:809)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782)
            ....
        Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.util.file.JarArchiveBrowser.<init>(JarArchiveBrowser.java:74)
            at org.jboss.util.file.FileProtocolArchiveBrowserFactory.create(FileProtocolArchiveBrowserFactory.java:48)
            at org.jboss.util.file.ArchiveBrowser.getBrowser(ArchiveBrowser.java:57)
            at org.jboss.ejb3.EJB3Deployer.hasEjbAnnotation(EJB3Deployer.java:213)
            ....

    This was caused by the limit on the number of open file descriptors on Linux/Unix operating systems, which defaults to 1024. You can check the current value with:

        ulimit -n

    and increase the number of open file descriptors (say, to 2048) with:

        ulimit -n 2048

    Check the man page of ulimit for more details.


  • Why does TeX/LaTeX not speed up in subsequent runs?

    - by Debilski
    I really wonder why even recent TeX/LaTeX systems do not use any caching to speed up later runs. Every time I fix a single comma*, calling LaTeX costs me about the same amount of time, because it needs to load and convert every single picture file. (* I know that even changing a tiny comma could affect the whole structure, but of course a well-written cache format could detect the impact of such a change. Also, there might be situations where 100% correctness is not needed as long as the run is fast.) Is there something in the TeX language which makes this complicated or impossible to accomplish, or is it just that in the original implementation of TeX there was no need for it (because it would have been slow anyway on the large computers of the time)? But then, on the other hand, why hasn't this annoyed other people enough that they've started a fork with some sort of caching (or a transparent conversion of TeX files to a format which is faster to parse)? Is there anything I can do to speed up subsequent runs of LaTeX, apart from putting all the material into chapterXX.tex files and then commenting them out?
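
    On the last point: standard LaTeX already refines the chapterXX.tex trick so that nothing has to be commented out. With \include and \includeonly, only the named chapter is re-typeset, while the .aux files of the others keep page and reference numbers stable. A minimal sketch:

        % main.tex
        \documentclass{book}
        % Re-typeset only chapter03; remove this line for a full build.
        \includeonly{chapter03}
        \begin{document}
        \include{chapter01}
        \include{chapter02}
        \include{chapter03}
        \end{document}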


  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now, but I don't know where to start. I've written a few Java/Swing applications, but again they were for internal use. My understanding is that Java and other semi-compiled or interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are native compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written once (or close to it) and compiled for different operating systems, but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease and user experience) on Windows. My only issue there is that I don't have a large starting budget, and paying out the wazoo for the required development tools is not really in the cards.


  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment, and keep a central bare repository where I pull and push most of my stuff. This central repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the safecrlf=true flag anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and when I look at the diff it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies to a Mac OS X machine. Sometimes I get the feeling that the different systems are wrestling over the line endings, but I can't be sure (I'm losing track of all the times I've changed specific files). I didn't always have the autocrlf flag set; I set it many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or is there something else that needs configuring too? I tried git checkout -- . about a million times, but with no success.
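
    One thing that might help, assuming a reasonably recent git on every machine: commit the normalisation policy into the repository itself with a .gitattributes file, so every clone behaves the same way regardless of each machine's autocrlf setting. The patterns below are only examples:

        # .gitattributes, committed at the repository root.
        # Normalise everything git detects as text:
        * text=auto
        # Force specific endings in the working tree:
        *.sh text eol=lf
        *.bat text eol=crlf

    An existing checkout usually needs a one-off renormalising commit afterwards, so that files committed under the old settings match the new policy.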


  • Slow Python HTTP server on localhost

    - by Abiel
    I am experiencing some performance problems with a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, even though the server and all clients run on the same local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome, or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect. Adding to my confusion, I do not experience any performance problems with any client on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7. Below is my very simple server example, which simply returns 'Hello world' for any GET request.

        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                print("Just received a GET request")
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write('Hello world')
                return

            def log_request(self, code=None, size=None):
                print('Request')

            def log_message(self, format, *args):
                print('Message')

        if __name__ == "__main__":
            try:
                server = HTTPServer(('localhost', 80), MyHandler)
                print('Started http server')
                server.serve_forever()
            except KeyboardInterrupt:
                print('^C received, shutting down server')
                server.socket.close()

    Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a cause of a slow server. Unfortunately setting them to just print a static message did not solve my problem. Also, notice that I have put a print() statement in as the first line of the do_GET() routine in MyHandler. The slowness occurs before this message is printed, meaning that none of the stuff that comes after it is causing the delay.
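
    One classic cause of exactly this one-second, client-dependent pattern is name resolution rather than the server itself: a client that tries the IPv6 address ::1 for 'localhost' first (as Windows 7 does by default, while XP typically does not) waits for that connection attempt to fail before falling back to the IPv4 127.0.0.1 the server is actually bound to. A quick way to test that hypothesis, assuming the server above is running:

        import time
        import urllib2

        # If the IPv6-fallback theory is right, the numeric address is fast
        # while the hostname is slow, even though both hit the same server.
        for url in ("http://127.0.0.1/", "http://localhost/"):
            start = time.time()
            urllib2.urlopen(url).read()
            print("%s took %.3f seconds" % (url, time.time() - start))

    If the numeric form is fast, pointing clients at 127.0.0.1, or additionally listening on the IPv6 loopback, removes the delay.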


  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was "Syncing objects between two disparate systems best approach". Anyway, to begin: because there are no RSS feeds available, I'm screen-scraping a webpage. The scraper does a fetch, goes through the page to scrape out all of the information I'm interested in, and dumps that information into a sqlite database, so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the sqlite db, such as: have I looked at the data; is the data new or old; a bookmark into a chunk of data (think of the data as a collection of unrelated items, with the bookmark just a pointer to where I am in processing/reading them). So right now my problem is figuring out how to update the local sqlite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

    1. Download the page itself.
    2. Create a temporary table for the parsed data to go into.
    3. Compare the official and the temporary tables, and copy updates and/or new information to the official table.

    This process seems kind of complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering whether there isn't a better approach, or whether anyone has suggestions on how to architect/structure such a system?
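
    A minimal sketch of step 3 using Python's sqlite3 module, assuming a hypothetical items(id, body, seen) schema where id is a stable key extracted from the page. The point is that "new" and "changed" fall out of two set operations, while locally stored metadata such as the seen flag survives untouched:

        import sqlite3

        conn = sqlite3.connect("cache.db")

        # Stage the freshly scraped rows; scraped_rows is assumed to be a
        # list of (id, body) tuples produced by the parser.
        conn.execute("CREATE TEMP TABLE staging (id TEXT PRIMARY KEY, body TEXT)")
        conn.executemany("INSERT INTO staging VALUES (?, ?)", scraped_rows)

        # New rows: present in staging but not yet in the official table.
        conn.execute("""
            INSERT INTO items (id, body, seen)
            SELECT s.id, s.body, 0
            FROM staging s LEFT JOIN items i ON i.id = s.id
            WHERE i.id IS NULL""")

        # Changed rows: same id, different body. Update the content only,
        # leaving the seen flags and bookmarks exactly as they were.
        conn.execute("""
            UPDATE items
            SET body = (SELECT body FROM staging WHERE staging.id = items.id)
            WHERE EXISTS (SELECT 1 FROM staging
                          WHERE staging.id = items.id
                          AND staging.body <> items.body)""")
        conn.commit()

    Rows in neither category are unchanged and never touched, so this stays cheap even when most of the page is the same from one fetch to the next.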


  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked in both kinds of environment: one with strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) one where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:

        'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?'
        'What if one of our systems changes and the connector breaks?'
        'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?'

    With one developer behind these little projects, the answers are invariably:

        'Nobody, but...'
        'It will break, just like any other application would...'
        'I, uh...'

    How can I better answer these questions and encourage people to take a little risk, in order to stimulate creativity and fast-paced, short-lifecycle development, instead of spending those 6 months consulting about which tender process we might use?


  • How to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default-constructed value. So sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries, though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and Eigen's SparseVector. Both store:

        size_t* indices_;  // a dynamic array
        T* values_;        // a dynamic array
        int size_;
        int capacity_;

    Why don't they simply use

        vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you and simplifies the accompanying iterator classes; it also uses one memory block instead of two, so it incurs half the reallocations and might have better locality of reference. The other solution might search more quickly, since during a search the cache lines fill up with index data only. There might also be some alignment advantages if T is an 8-byte type? It seems to me that the vector of pairs is the better solution, yet both containers chose the other one. Why?

