Search Results

Search found 7019 results on 281 pages for 'adaptive systems'.


  • Google Maps API keys to be set webserver-wide (as env var? inside Apache?)

    - by ~knb
    I have a web site with many virtual hosts, each registered with several domain names (ending in .org, .de), e.g. site1.mysite.de, site2.mysite.org. Then I have different templating systems based on several programming languages (Perl and PHP) in use on the web server. The Google Maps API requires a unique API key for each vhost. I want to have something like a web-server-wide variable $goomapkey that I can read from inside my code. In PHP, I currently have a kludgy case-analysis solution like $domain = substr($_SERVER['SERVER_NAME'], -3); if (".de" == $domain){ //if ("xxxxxx" eq substr($ENV{SERVER_NAME}, 0, 5)){ // $gookey = "ABQIAAA..."; //} else { //site1.de $gookey = "ABQIAAAA1Js..."; //} } elseif ("dev" == substr($_SERVER['SERVER_NAME'], 0, 3)){ //dev.mysite.org $gookey = "ABQIAAAA1JsSb..."; } else { //www.mysite.org $gookey = "ABQIAAAA1JsS..."; //TODO: Add more keys for each virtual host, for my.machinename.de, IP-address based URL, ... } ... inside my PHP-based CMS. This is a non-ideal solution because it is PHP-only, I still have to set it in several HTML templates inside the CMS, and there are too many cases. I want the Google Maps API key to be set by the Apache web server, which examines the request early in the request loop, before any PHP page template code is constructed and evaluated. Is an environment variable a good solution? Which technology should be used to set the $goomapkey variable? I'd prefer a mod_perl2 Apache request handler, but the documentation is confusing (many API changes in the past). Which Apache module could I use? Is there a built-in Apache module that does the same thing?
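    Whatever layer ends up owning the key, the usual way to avoid case analysis is a single host-to-key mapping consulted once per request; in the setup above that table could live in per-<VirtualHost> SetEnv directives (mod_env) or in one shared config file read by both the Perl and PHP code. A minimal sketch of the lookup idea, written in Python purely for illustration (host names and keys are placeholders, not real values):

```python
# Hypothetical mapping from virtual host name to Google Maps API key.
# In the PHP/Perl setup described above the same table could live in an
# Apache SetEnv directive per <VirtualHost> or in a shared config file.
GOOMAP_KEYS = {
    "site1.mysite.de":  "ABQIAAA-placeholder-1",
    "site2.mysite.org": "ABQIAAA-placeholder-2",
    "dev.mysite.org":   "ABQIAAA-placeholder-3",
}

def goomap_key(server_name, default=None):
    """Return the API key for the vhost handling the current request."""
    return GOOMAP_KEYS.get(server_name.lower(), default)

# e.g. key = goomap_key(environ["SERVER_NAME"]) inside a request handler
```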


  • Python refuses text.replace() in one environment

    - by gx
    Hi fellow programmers, I've been mucking about with the following bit of dirty support code for a Pylons app, which works fine in a Python shell, a separate Python file, or when running in paster. Now, we've put the application online through mod_wsgi and Apache, and this specific piece of code stopped working completely. First off, the code itself: def fixStyle(self, text): t = text.replace('<p>', '<p style="%s">' % (STYLEDEF,)) t = t.replace('class="wide"', 'style="width: 125px; %s"' % (DEFSTYLE,)) t = t.replace('<td>', '<td style="%s">' % (STYLEDEF,)) t = t.replace('<a ', '<a style="%s" ' % (LINKSTYLE,)) return t It seems pretty straightforward, and to be honest, it is. So what happens when I put a piece of text in it, for example: <table><tr><td>Test!</td></tr></table> The output should be: <table><tr><td style="stuff-from-styledef">Test!</td></tr></table> and it is, on most systems. When we put it through the app on Apache/mod_wsgi though, the following happens: <table><tr><td>Test!</td></tr></table> You guessed it. I'm currently at a loss and have no idea where to go next. Googling doesn't really work out, so I'm hoping you guys can help out and perhaps point out a fundamental issue with whatever is causing this. If anything is missing I'll edit it in.
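    One way to narrow this down (a debugging sketch, not a known fix) is to log exactly what fixStyle() receives under mod_wsgi: if the markup arrives HTML-escaped (&lt;td&gt; instead of <td>), arrives as a different type than expected, or the deployed code isn't the version being edited, every replace() will appear to do nothing. Something along these lines, with the constant passed in explicitly:

```python
import logging

log = logging.getLogger(__name__)

def fix_style_debug(text, styledef):
    """Instrumented version of one replacement, for tracking down why the
    substitutions appear to be no-ops under mod_wsgi."""
    log.warning("fixStyle input: type=%s value=%.200r", type(text), text)
    out = text.replace('<td>', '<td style="%s">' % (styledef,))
    log.warning("literal '<td>' present: %s, output changed: %s",
                '<td>' in text, out != text)
    return out
```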


  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data, it could be 16 bit, 24 bit packed, 32 bit, etc.. It could be signed, or unsigned, and it could be 32 or 64 bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24 bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16 bit non interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code. What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t, or int16_t for the 32 and 16 bit signed PCM respectively, I'll probably have to store the 24 bit PCM in an int32_t for 32 bit, 8 bit byte systems as well. Can anyone recommend a good way to put data into this array, and pull it out later? I'd like to avoid large sections of code which look like: switch( mFormat ) { case 1: // unsigned 8 bit for( int i = 0; i < mChannels; i++ ) framesArray = (uint8_t*)pcm[i]; break; case 2: // signed 8 bit for( int i = 0; i < mChannels; i++ ) framesArray = (int8_t*)pcm[i]; break; case 3: // unsigned 16 bit ... Limitations: I'm working in C/C++, no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16 bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff


  • How can I abstract out the core functionality of several Rails applications?

    - by hornairs
    I'd like to develop a number of non-trivial Rails applications which all implement a core set of functionality but each have certain particular customizations, extensions, and aesthetic differences. How can I pull the core functionality (models, controllers, helpers, support classes, tests) common to all these systems out in such a way that updating the core will benefit every application based upon it? I've seen Rails Engines but they seem to be too detached, almost too abstracted to be built upon. I can see them being useful for adding one component to an existing app, for example bolting a blog engine onto your existing e-commerce site. Since engines seem to be mostly self-contained, it seems difficult and inconvenient to override their functionality and views while keeping DRY. I've also considered abstracting the code into a gem, but this seems a little odd. Do I make the gem depend on the Rails gems, define models & controllers inside it, and then subclass them in my various applications? Or do I define many modules inside the gem that I include in the different spots inside my various applications? How do I test the gem and then test the set of customizations and overridden functionality on top of it? I'm also concerned with how I'll develop the gem and the Rails apps in tandem; can I vendor a git repository of the gem into the app and push from that so I don't have to build a new gem every iteration? Also, are there private gem hosts, and can I set up my own gem source? Also, any general suggestions for this kind of undertaking? Abstraction paradigms to adhere to? Required reading? Comments from the wise who have done this before? Thanks!


  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system which will necessarily involve copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately for reasons not worth going into, the migration script can't be modified to list files that it doesn't migrate. My question differs from this previously answered one because of the fact that I cannot rely on filenames as a comparison. I know the basic outline of the process would be: Generate a list of checksums for all files, recursing through F1 Do the same for F2 Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2. I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the 'comm' command to compare the list of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get it to bring me a useful comparison. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
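    For the comparison step, one workable alternative to comm/awk over md5sum output is to do the whole thing in a short script: hash every file under each root, then take the set difference of the digests. A sketch in Python (the hash choice and mount points are assumptions; files with identical content under F1 collapse to a single example path):

```python
import hashlib
import os

def digests(root):
    """Map checksum -> one example path, for every file under root."""
    seen = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, 'rb') as f:
                for chunk in iter(lambda: f.read(1 << 20), b''):
                    h.update(chunk)
            seen.setdefault(h.hexdigest(), path)
    return seen

f1 = digests('/mnt/F1')   # assumed mount points
f2 = digests('/mnt/F2')
for checksum in set(f1) - set(f2):
    print(f1[checksum])    # files present in F1 but missing from F2
```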


  • Invoke Command When "ENTER" Key Is Pressed In XAML

    - by bitxwise
    I want to invoke a command when ENTER is pressed in a TextBox. Consider the following XAML: <UserControl ... xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" ...> ... <TextBox> <i:Interaction.Triggers> <i:EventTrigger EventName="KeyUp"> <i:InvokeCommandAction Command="{Binding MyCommand}" CommandParameter="{Binding Text}" /> </i:EventTrigger> </i:Interaction.Triggers> </TextBox> ... </UserControl> and that MyCommand is as follows: public ICommand MyCommand { get { return new DelegateCommand<string>(MyCommandExecute); } } private void MyCommandExecute(string s) { ... } With the above, my command is invoked for every key press. How can I restrict the command to only invoke when the ENTER key is pressed? I understand that with Expression Blend I can use Conditions but those seem to be restricted to elements and can't consider event arguments. I have also come across SLEX which offers its own InvokeCommandAction implementation that is built on top of the System.Windows.Interactivity implementation and can do what I need. Another consideration is to write my own trigger, but I'm hoping there's a way to do it without using external toolkits.


  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, We are having a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as load balancing component w/ sticky session and Jboss 5 as application container for the Java application. We are inconsistently seeing an issue in our production environment where our AJP queues between Apache and Jboss as shown in the JMX console fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all involved system components (overall traffic, load db, process list db, load of all clustered application server nodes) nothing points towards a capacity issue which would explain why the calls are being stalled in the AJP queue. Instead all systems appear sufficiently idle. So far, our only remedy to this issue is to restart the appservers and the load balancer which only occasionally clears the AJP queues. We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user although the system is not under a high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk with mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting I would be more than willing to do so. /Ben


  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value? A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.) A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.) Something else I haven't thought of. Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)


  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes: since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (I didn't try it yet). Even when I only need a few hundred megabytes, the allocations fail at a certain moment. The second problem is the biggest one, and I think it's related to the maximum number of PTE's (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips): Where can I find information about the maximum number of PTE's in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTE's be configured in the application or in Windows? Thanks, Patrick PS. A note for those who will try to argue that you shouldn't write your own memory manager: My application is rather specific, so I really want full control over memory management (can't give any more details). Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ run time (it only said "block corrupt" minutes after the actual corruption). We also tried standard Windows utilities (like GFLAGS, ...) but they slowed down the application by a factor of 100, and couldn't find the exact position of the overwrite either. We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTE's).


  • Where are mpx386.s and start.c in Minix 3.2?

    - by John Bowlinger
    I'm trying to follow along in Operating Systems: Design and Implementation, 3rd edition, and I'm now at the part in the book where Tanenbaum is discussing bootup and kernel process switching. He keeps referring to these 2 files (mpx386.s, start.c) that are supposedly in a directory called kernel, but I can't seem to find them. In the root directory, when I go to boot/minix/3.2.0/kernel, kernel just seems to be a binary file that is illegible in a terminal. There also seems to be a bunch of mod01-mod12 .gz binary files in the 3.2.0 directory. Am I in the wrong directory, or is there something I need to install and do to read kernel? I would like to follow along in the book with what's on my screen, instead of constantly flipping back and forth. I realize a lot of files are completely different from this book published in 2006 and I accept that, but this seems to be a critical juncture of the book and the operating system as a whole. If it's any consolation, I'm running the OS in VirtualBox on a 64-bit MacBook.


  • Are there any NOSQL-compatible CMS projects?

    - by Michael
    This question is partially related to an older question (Any CMS is Google App Engine compatible?) , but is slightly more general. It seems that in most CMS systems, the most fragile failure point is the database. Traditional database implementations scale poorly and will never be able to handle unforeseen spikes of traffic. Since Google App Engine was designed to help even small businesses overcome that problem, I had the same question that was asked earlier this year with less than satisfactory answers. But more generally, where are the CMS projects that support NOSQL databases? Looking over Wikipedia's list of CMS platforms, I see without much effort that only traditional RDBMS are supported by every single vendor on the list. I would have expected to see at least one or two projects handling CouchDB or similar engines. I understand the complexities of implementing a NOSQL solution to a problem that is typically solved using the relations cleanly expressed in any RDBMS, but there seems to be a rather wide market gap. Since databases are, today, easily outsourced to Google, Amazon, and others which use NOSQL models, I am amazed that there are not more projects actively pursuing this path. Am I simply not aware? Can someone please point me to projects that have real momentum that are developing on this path? I'm looking for two things: a CMS that has as its backbone a NOSQL database enabling easy database outsourcing (hosted MySQL clusters and similar solutions are not what I'm looking for) a project that is built to run on either a PaaS architecture like Google App Engine or an IaaS architecture like Amazon EC2 Any pointers in that direction would be most welcome.


  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior: A sends a request to B. This works. A now starts reading the reply from B. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close() the socket. However, A did not close() the socket, nor did it crash. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection. Users have also reported variations of this behavior, e.g.: A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close() the socket. However B did not close() the socket, nor did it crash. B reads a partial request and then suddenly gets an EOF. The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered: Double close() bugs (close() is called twice on the same file descriptor): probably not as that would result in EBADF errors, but I haven't seen them. Increasing the maximum file descriptor limit. One user reported that this worked for him, the rest reported that it did not. What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them have crashed because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.


  • Continuous build infrastructure recommendations for primarily C++; GreenHills Integrity

    - by andersoj
    I need your recommendations for continuous build products for a large (1-2 MLOC) software development project. Characteristics: ClearCase revision control Approx 80% C++; 15% Java; 5% script or low-level Compiles for Green Hills Integrity OS, but also some Windows and JVM chunks Mostly an embedded system; also includes some UI pieces and some development support (simulation tools, config tools, etc...) Each notional "version" of the deliverable includes deployment images for a number of boards, UI machines, etc... (~10 separate images; 5 distinct operating systems) Need to maintain/track many simultaneous versions which, notably, are built for a variety of different board support packages Build cycle time is a major issue on the project, so I need support for whatever features help address this (mostly managing a large farm of build machines, I guess..) Operates in a secure environment (this is a gov't program) (Edited to add: This is a classified program; outsourcing the build infrastructure is a non-starter.) Interested in any best practices or peripheral guidance you might offer. Build automation is one of several overlapping best practices that appear to be missing on the program, but try to keep your answers focused on the build infrastructure piece and directly related observations. Cost is not an object. Scalability and ease of retrofitting onto an existing infrastructure are key. JA


  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exe's, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines and the solution failed to build on my machine. The build on my machine encountered two problems: During a simple build creation of the biggest static library (about 522Mb in debug mode) would fail with the message "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879" Full solution rebuild creates this library, however when it comes to linking the library to main .exe file, devenv.exe spawns link.exe which consumes about 80Mb of physical memory and 250MB of virtual and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On PCs of my colleagues where successful build could be performed, there is only one link.exe process which uses all the memory required for linking (about 500Mb physical). There is a plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4Gb of RAM, Windows XP SP3. We are using Visual studio installed from the same source. I tried using a different RAM and CPU, using dedicated graphics adapter to eliminate possibility of video memory sharing influencing the build, putting solution files to different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86 and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106, none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea that I have left is reinstalling the OS, however I am not sure that it would help and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks


  • Git is not using the first editor in my $PATH

    - by GuillaumeA
    I am using OS X 10.8, and I used brew to install a more recent version of emacs than the one shipped with OS X. The newer emacs binary is installed in /usr/local/bin (24.2.1), and the old "shipped-with-osx" one in /usr/bin (22.1.1). I updated my $PATH env variable by prepending /usr/local/bin to it. It works fine in my shell (i.e. typing emacs runs the 24.2.1 version), but when git opens the editor, the emacs version is 22.1.1. Isn't git supposed to use $PATH to find the editor I want to use? Additional information: $ type -a emacs emacs is /usr/local/bin/emacs emacs is /usr/bin/emacs emacs is /usr/local/bin/emacs $ env PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin SHELL=/bin/zsh PAGER=most EDITOR=emacs -nw _=/usr/bin/env Please note that I'd prefer not to set the absolute path of my editor directly in my git conf, as I use this conf across multiple systems. EDIT: Here's a bit of my .zshrc: # Mac OS X if [ `uname` = "Darwin" ]; then # Brew binaries PATH="/usr/local/bin":"/usr/local/sbin":$PATH else # Everyone else (Linux) # snip fi So, yes, I could add a line export EDITOR='/usr/local/bin/emacs -nw' in the first if, but I'd like to understand why git is not using my PATH variable :)


  • Toolbox/framework to construct lightweight public-facing web site

    - by aSteve
    I am aware of full-blown content management systems (CMS) such as SugarCRM and TikiWiki... where content is typically stored in a database... and edited through the same interface as it is published. While I like many of the features, the product is clearly aimed at enterprise-wide use rather than to be public-facing. What I'd like to establish are potential alternatives that fill the space between full-blown CMS and hand-coded bespoke site. I like the way that I can add modules to my CMS... allowing me to quickly introduce new functionality, and I'd like an analogous feature in a system for public web-content. Modules I know I'd like include moderated comments; web-form-to-email gateway; menus/tabs... in future, perhaps mapping or diaries or RSS integration - etc. Where my requirements differ from a CMS, I don't need (or want) most content to be editable through the main site... and, somehow, I do want to be able to preview how updates will be presented to the public rather than to make live changes. For these purposes, in contrast to those where a typical CMS would be ideal, presentation is of paramount importance - and trumps any desire to immediately disseminate information. I realise that this is a very high-level question... (suggestions of additional tags welcome) - I mentioned PHP only as - ideally - I'm looking for an open source solution and a PHP deployment is an easy option. What are my options?


  • How can I specify processor affinity?

    - by BenAlabaster
    I have an application that's having some trouble handling multi-processor systems. It's not an app that I have a particular affection for modifying and would like to avoid it if possible. However, I'm not above modifying the code if I have to. The application is written in VBA (and hence my inclination to avoid touching it). We've noticed that the application seems to run pretty smoothly if we set the processor affinity to a single processor using task manager, only manifesting instability when processor affinity isn't set. I know that I can specify the processor affinity of a task using .NET and as such, there lies a possibility of me writing a shell application that could be used to run legacy applications with a specified processor affinity, does anyone have any experience with this and can throw out some ideas as to headaches I'm likely to run into with this approach? The other question is: Is it in fact possible to modify the core VBA product to handle its own processor affinity? I've never had to handle this with any of my applications natively so this (at this point in time) is completely outside my realm of expertise. Thanks in advance
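    As a sketch of the launcher idea, though in Python with the psutil package rather than the .NET shell described above: start the legacy executable, then pin the new process to one logical CPU, which has the same effect as setting affinity by hand in Task Manager. The executable path below is a placeholder.

```python
import subprocess
import psutil  # third-party package; assumed available

def run_pinned(cmd, cpu=0):
    """Launch a legacy executable and restrict it to a single logical CPU."""
    child = subprocess.Popen(cmd)
    # Same effect as Task Manager's "Set Affinity" dialog; note the process
    # runs unpinned for the brief moment before this call takes effect.
    psutil.Process(child.pid).cpu_affinity([cpu])
    return child

# e.g. run_pinned([r"C:\LegacyApp\legacy.exe"], cpu=0)  # path is hypothetical
```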


  • A generic C++ library that provides QtConcurrent functionality?

    - by Lucas
    QtConcurrent is awesome. I'll let the Qt docs speak for themselves: QtConcurrent includes functional programming style APIs for parallel list processing, including a MapReduce and FilterReduce implementation for shared-memory (non-distributed) systems, and classes for managing asynchronous computations in GUI applications. For instance, you give QtConcurrent::map() an iterable sequence and a function that accepts items of the type stored in the sequence, and that function is applied to all the items in the collection. This is done in a multi-threaded manner, with a thread pool equal to the number of logical CPU's on the system. There are plenty of other function in QtConcurrent, like filter(), filteredReduced() etc. The standard CompSci map/reduce functions and the like. I'm totally in love with this, but I'm starting work on an OSS project that will not be using the Qt framework. It's a library, and I don't want to force others to depend on such a large framework like Qt. I'm trying to keep external dependencies to a minimum (it's the decent thing to do). I'm looking for a generic C++ framework that provides me with the same/similar high-level primitives that QtConcurrent does. AFAIK boost has nothing like this (I may be wrong though). boost::thread is very low-level compared to what I'm looking for. I know C# has something very similar with their Parallel Extensions so I know this isn't a Qt-only idea. What do you suggest I use?


  • C# How to create various objects at runtime that can hold strongly typed data?

    - by JL
    Is it possible to create objects at runtime without having to have hard-coded class definitions, then populate properties with primitives or even strongly typed data types? For example: let's say I want an XML config file that could hold configuration values for connecting to various systems in an SOA application. In C# I read in these values, but for each system the properties are different (e.g. SQL might have a connection string, while SharePoint might need a username + password + domain + url, while an SMTP server would need username + password + port + url). So instead of creating static classes such as public class SharePointConfiguration or public class SQLConfiguration and then giving each class its own custom properties (this is cumbersome), is there not a preferable way to achieve this, without using 1990s methods? In other words, it would still be nice to have IntelliSense, code completion, and named properties. Since this collection of properties (object) would be passed within the class, and possibly to other classes, from function to function, I am also wondering where this class definition would get defined if it's all happening at run time. Any recommendations, and I hope the question was clear enough. I would like to use language features, not hacks. Thank you.


  • Enterprise Platform in Python, Design Advice

    - by Jason Miesionczek
    I am starting the design of a somewhat large enterprise platform in Python, and was wondering if you guys can give me some advice as to how to organize the various components and which packages would help achieve the goals of scalability, maintainability, and reliability. The system is basically a service that collects data from various outside sources, with each outside source having its own separate application. These applications would poll a central database and get any requests that have been submitted to perform on the external source. There will be a main website and a REST/SOAP API that should also have access to the central data service. My initial thought was to use Django for the web site, web service and data access layer (using its built-in ORM), and then the outside-source applications can use the web service(s) to get the information they need to process the request and save the results. Using this method would allow me to have multiple instances of the service applications running on the same or different machines to balance out the load. Are there more elegant means of accomplishing this? I've heard of messaging systems such as MQ; would something like that be beneficial in this scenario? My other thought was to use a completely separate data service not based on Django, and use some kind of remoting or remote objects (if they exist in Python) to interact with the data model. The downside here would be the website, which would become much slower if it had to push all of its data requests through a second layer. I would love to hear what other developers have come up with to achieve these goals in the most flexible way possible.
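    As a sketch of the polling half of that design (the endpoint, authentication, and payload shapes are all assumptions, not part of the question): each outside-source application can poll the central REST service for pending requests and post results back, which keeps the collectors stateless and lets as many instances run as the load requires.

```python
import time
import requests  # third-party HTTP client; assumed available

API = "https://central.example.com/api"   # hypothetical central service

def poll_forever(source_name, interval=30):
    """Fetch pending requests for one outside source, process, report back."""
    while True:
        resp = requests.get("%s/requests" % API,
                            params={"source": source_name, "status": "pending"},
                            timeout=10)
        resp.raise_for_status()
        for req in resp.json():
            result = handle(req)                       # source-specific work
            requests.post("%s/requests/%s/result" % (API, req["id"]),
                          json=result, timeout=10)
        time.sleep(interval)

def handle(req):
    # placeholder for the per-source processing logic
    return {"ok": True}
```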


  • How to push further as a programmer?

    - by MaXX
    For the last, hmm, 6 months I've been reading into Programming in C, I got myself K&Rv2, BEEJ's socket guide, Expert C programming, Linux Systems Programming, the ISO/IEC 9899:1999 specification (real, and not draft). After receiving them from Amazon, I got Linux installed, and got to it. I'm done with K&R, about halfway through Expert C Programming, but still feel weak as a programmer, I'm sure it takes much more than 6 months of reading to become truly skilled, but my question is this: I've done all the exercises in K&Rv2 (in chapter 1) and some in other chapters, most of which are generally really boring. How do I lift my skills, and become truly great? I've invested money, time and a general lifestyle for something I truly desire, but I'm not sure how exactly to achieve it. Could someone explain to me, perhaps if I need to continuously code, what exactly I'm to code? I'm pretty sure, coding up hello world programs isn't going to teach me any more than I already know about anything. A friend of mine said "read" (with emphasis on read) a man page a day, but reading is all I do, I want to do, but I'm not sure what! I'm interested in security, but I'm not sure as a novice what to code that would be considered enough. Ah, I hope you don't delete this paste :) Thanks


  • C# array of objects - conditional validation

    - by fishdump
    Sorry about the vague title! I have a class with a number of member variables (system, zone, site, ...) public sealed class Cello { public String Company; public String Zone; public String System; public String Site; public String Facility; public String Process; //... } I have an array of objects of this class: private Cello[] m_cellos = null; // ... I need to know whether the array contains objects with the same site but different systems, zones or companies, since such a situation would be illegal. I have various other checks to make, but they are all along similar lines. The Array class has a number of functions that look promising, but I am not very up on defining 'key selector' functions and things like that. Any suggestions or pointers would be greatly appreciated. --- Alistair.
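    The check itself is just "group by Site, then verify that every member of a group agrees on Company, Zone and System". The same logic sketched in Python purely for illustration (a C#/LINQ version would group by Site and compare the distinct combinations; the field names follow the Cello class above):

```python
from collections import defaultdict

def inconsistent_sites(cellos):
    """Return the sites whose entries disagree on company, zone or system.

    Each element of `cellos` is assumed to expose .site, .company, .zone
    and .system attributes, mirroring the Cello class above.
    """
    by_site = defaultdict(set)
    for c in cellos:
        by_site[c.site].add((c.company, c.zone, c.system))
    return [site for site, combos in by_site.items() if len(combos) > 1]
```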


  • Using TXMLDocument to serialize form settings to XML and database.

    - by LukLed
    I have an interface: type IXMLSerializable = interface function SaveToXML : DOMString; function SaveToXMLDocument : IXMLDocument; procedure LoadFromXML(AXML : DOMString); end; It is used to serialize some settings of forms or frames to xml. Simple implementation: SaveToXMLDocument: function TSomething.SaveToXMLDocument: IXMLDocument; begin Result := TXMLDocument.Create(nil); with Result do begin Active := True; with AddChild(Self.Name) do begin AddChild(edSomeTextBox.Name).Attributes['Text'] := edSomeTextBox.Text; end; end; Result := XMLDoc; end; LoadFromXML: procedure TSomething.LoadFromXML(AXML: DOMString); var XMLDoc : IXMLDocument; I : Integer; begin XMLDoc := TXMLDocument.Create(nil); with XMLDoc do begin LoadFromXML(AXML); Active := True; with ChildNodes[0] do begin for I := 0 to ChildNodes.Count-1 do begin If ChildNodes[I].NodeName = 'edSomeTextBox' then edSomeTextBox.Text := ChildNodes[I].Attributes['Text']; end; end; end; end; SaveToXML: function TSomething.SaveToXML: DOMString; begin SaveToXMLDocument.SaveToXML(Result); end; DOMString result of SaveToXML is saved to database to blob field. I had some encoding issues with other implementations and this one works fine (right now). Do you see any dangers in this code? Can I have issues with different settings on various machines and systems?


  • Variable modification in a child process

    - by teaLeef
    I am working on Bryant and O'Hallaron's Computer Systems, A Programmer's Perspective. Exercise 8.16 asks for the output of a program like (I changed it because they use a header file you can download on their website): #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/wait.h> #include <errno.h> #include <unistd.h> #include <string.h> int counter = 1; int main() { if (fork() == 0){ counter--; exit(0); } else{ Wait(NULL); printf("counter = %d\n", ++counter); } exit(0); } I answered "counter = 1" because the parent process waits for its children to terminate and then increments counter. But the child first decrements it. However, when I tested the program, I found that the correct answer was "counter = 2". Is the variable "counter" different in the child and in the parent process? If not, then why is the answer 2?
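    The answer hinges on fork() giving the child a private copy of the parent's address space: the child's counter-- changes only the child's copy, the parent's copy is still 1 when the child exits, and the parent then prints ++counter, i.e. 2. The same behaviour can be reproduced with a short sketch using Python's os.fork (a thin wrapper over the same system call; POSIX only):

```python
import os

counter = 1

pid = os.fork()
if pid == 0:                      # child: decrements its own copy only
    counter -= 1
    os._exit(0)
else:                             # parent: waits, then increments its copy
    os.waitpid(pid, 0)
    counter += 1
    print("counter = %d" % counter)   # prints "counter = 2", as in the C program
```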


  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterpise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicated to it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen from other sites, we have a really robust machine as our master db server. We just upgraded from a machine with only 4G RAM, but with similar hard drives, RAID, etc. It also ran Apache on it, so it was our db server and our application server. It was getting a little slow, so we split the db server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is the new db server has mysqld.exe consuming 95-100% of CPU almost all the time and is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is for setting config files on small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly because I can change some settings (like the server-id and it will kill the server at startup). Here is the my.ini file: #MySQL Server Instance Configuration File # ---------------------------------------------------------------------- # Generated by the MySQL Server Instance Configuration Wizard # # # Installation Instructions # ---------------------------------------------------------------------- # # On Linux you can copy this file to /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options # (@localstatedir@ for this installation) or to # ~/.my.cnf to set user-specific options. # # On Windows you should keep this file in the installation directory # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To # make sure the server reads the config file use the startup option # "--defaults-file". # # To run run the server from the command line, execute this in a # command line shell, e.g. # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # To install the server as a Windows service manually, execute this in a # command line shell, e.g. # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini" # # And then execute this in a command line shell to start the server, e.g. # net start MySQLXY # # # Guildlines for editing this file # ---------------------------------------------------------------------- # # In this file, you can use all long options that the program supports. # If you want to know the options a program supports, start the program # with the "--help" option. # # More detailed information about the individual options can also be # found in the manual. # # # CLIENT SECTION # ---------------------------------------------------------------------- # # The following options will be read by MySQL client applications. 
# Note that only client applications shipped by MySQL are guaranteed # to read this section. If you want your own MySQL client program to # honor these values, you need to specify it as an option during the # MySQL client library initialization. # [client] port=3306 [mysql] default-character-set=latin1 # SERVER SECTION # ---------------------------------------------------------------------- # # The following options will be read by the MySQL Server. Make sure that # you have installed the server correctly (see above) so it reads this # file. # [mysqld] # The TCP/IP Port the MySQL Server will listen on port=3306 #Path to installation directory. All paths are usually resolved relative to this. basedir="D:/MySQL/" #Path to the database root datadir="D:/MySQL/data" # The default character set that will be used when a new schema or table is # created and no character set is defined default-character-set=latin1 # The default storage engine that will be used when create new tables when default-storage-engine=MYISAM # Set the SQL mode to strict #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION" # we changed this because there are a couple of queries that can get blocked otherwise sql-mode="" #performance configs skip-locking max_allowed_packet = 1M table_open_cache = 512 # The maximum amount of concurrent sessions the MySQL server will # allow. One of these connections will be reserved for a user with # SUPER privileges to allow the administrator to login even if the # connection limit has been reached. max_connections=1510 # Query cache is used to cache SELECT results and later return them # without actual executing the same query once again. Having the query # cache enabled may result in significant speed improvements, if your # have a lot of identical queries and rarely changing tables. See the # "Qcache_lowmem_prunes" status variable to check if the current value # is high enough for your load. # Note: In case your tables change very often or if your queries are # textually different every time, the query cache may result in a # slowdown instead of a performance improvement. query_cache_size=168M # The number of open tables for all threads. Increasing this value # increases the number of file descriptors that mysqld requires. # Therefore you have to make sure to set the amount of open files # allowed to at least 4096 in the variable "open-files-limit" in # section [mysqld_safe] table_cache=3020 # Maximum size for internal (in-memory) temporary tables. If a table # grows larger than this value, it is automatically converted to disk # based table This limitation is for a single table. There can be many # of them. tmp_table_size=30M # How many threads we should keep in a cache for reuse. When a client # disconnects, the client's threads are put in the cache if there aren't # more than thread_cache_size threads from before. This greatly reduces # the amount of thread creations needed if you have a lot of new # connections. (Normally this doesn't give a notable performance # improvement if you have a good thread implementation.) thread_cache_size=64 #*** MyISAM Specific options # The maximum size of the temporary file MySQL is allowed to use while # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE. # If the file-size would be bigger than this, the index will be created # through the key cache (which is slower). 
myisam_max_sort_file_size=100G # If the temporary file used for fast index creation would be bigger # than using the key cache by the amount specified here, then prefer the # key cache method. This is mainly used to force long character keys in # large tables to use the slower key cache method to create the index. myisam_sort_buffer_size=64M # Size of the Key Buffer, used to cache index blocks for MyISAM tables. # Do not set it larger than 30% of your available memory, as some memory # is also required by the OS to cache rows. Even if you're not using # MyISAM tables, you should still set it to 8-64M as it will also be # used for internal temporary disk tables. key_buffer_size=3072M # Size of the buffer used for doing full table scans of MyISAM tables. # Allocated per thread, if a full scan is needed. read_buffer_size=2M read_rnd_buffer_size=8M # This buffer is allocated when MySQL needs to rebuild the index in # REPAIR, OPTIMZE, ALTER table statements as well as in LOAD DATA INFILE # into an empty table. It is allocated per thread so be careful with # large settings. sort_buffer_size=2M #*** INNODB Specific options *** innodb_data_home_dir="D:/MySQL InnoDB Datafiles/" # Use this option if you have a MySQL server with InnoDB support enabled # but you do not plan to use it. This will save memory and disk space # and speed up some things. skip-innodb # Additional memory pool that is used by InnoDB to store metadata # information. If InnoDB requires more memory for this purpose it will # start to allocate it from the OS. As this is fast enough on most # recent operating systems, you normally do not need to change this # value. SHOW INNODB STATUS will display the current amount used. innodb_additional_mem_pool_size=11M # If set to 1, InnoDB will flush (fsync) the transaction logs to the # disk at each commit, which offers full ACID behavior. If you are # willing to compromise this safety, and you are running small # transactions, you may set this to 0 or 2 to reduce disk I/O to the # logs. Value 0 means that the log is only written to the log file and # the log file flushed to disk approximately once per second. Value 2 # means the log is written to the log file at each commit, but the log # file is only flushed to disk approximately once per second. innodb_flush_log_at_trx_commit=1 # The size of the buffer InnoDB uses for buffering log data. As soon as # it is full, InnoDB will have to flush it to disk. As it is flushed # once per second anyway, it does not make sense to have it very large # (even with long transactions). innodb_log_buffer_size=6M # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and # row data. The bigger you set this the less disk I/O is needed to # access data in tables. On a dedicated database server you may set this # parameter up to 80% of the machine physical memory size. Do not set it # too large, though, because competition of the physical memory may # cause paging in the operating system. Note that on 32bit systems you # might be limited to 2-3.5G of user level memory per process, so do not # set it too high. innodb_buffer_pool_size=500M # Size of each log file in a log group. You should set the combined size # of log files to about 25%-100% of your buffer pool size to avoid # unneeded buffer pool flush activity on log file overwrite. However, # note that a larger logfile size will increase the time needed for the # recovery process. innodb_log_file_size=100M # Number of threads allowed inside the InnoDB kernel. 
The optimal value # depends highly on the application, hardware as well as the OS # scheduler properties. A too high value may lead to thread thrashing. innodb_thread_concurrency=10 #replication settings (this is the master) log-bin=log server-id = 1 Thanks for all the help. It is greatly appreciated.

