Search Results

Search found 2484 results on 100 pages for 'maintain'.

  • Looking for an Open Source Project in need of help

    - by hvidgaard
    Hi Stack Overflow! I'm a CS student well on my way to graduating. I've had a hard time finding relevant student jobs (they seem to be taken mere hours after the notice goes up on the board), so instead I'm looking for an open source project in need of help. I'm aware that I should choose one that I use, but I'm not aware of any OS project that I use that needs help. That's why I'm asking you. I don't have any deep experience, but here are some of my biggest projects so far:

    - A BitTorrent-ish client in Python (a subset of BitTorrent)
    - An HTTP 1.1 web server in Java
    - A compiler from a subset of Java to run on the JRE
    - A Flash framework project to model an iPad look and feel (not to run actual iPad programs), complete with an API for programs
    - A complete MySQL database for a booking system, with departure and arrival times so you could only book valid tickets (with a Java frontend)

    Java and languages like AS3 and C# feel natural to me, as does Python. I have done a fair bit of hacking around in C, but I don't feel very comfortable with it; mostly I'm afraid of messing up because I have such a high degree of control. I would like to think I'm well aware of good software design practices, but in reality what I do is ask myself "would I like to use/maintain this?", and I love to refactor my code when I see optimizations. I love algorithms and making them run in the best possible time. I don't have a preferred domain to work in, but I wouldn't mind it being graphics- or math-heavy. Ideally I'm looking for a project in C++ to learn the ins and outs of it, but I'm well aware that I don't know that language very well. I would like to have a mentor-like figure until I'm confident enough to stand on my own; not someone to review all my code (I'm sure someone will to start with anyway), but someone to ask questions about the project and language in question. I do have a wife and two children, so don't expect me to put in 10+ hours every week. In return, I can work on my own, I strive to write modular and maintainable code, and I know how to read an API and use Google, Stack Overflow, and online resources in general. If you have any questions, shoot. I'm looking forward to your suggestions.

  • Moving a unit precisely along a path in x,y coordinates

    - by Adam Eberbach
    I am playing around with a strategy game where squads move around a map. Each turn a certain amount of movement is allocated to a squad, and if the squad has a destination the points are applied each turn until the destination is reached. Actual distance is used, so if a squad moves one position in the x or y direction it uses one point, but moving diagonally takes ~1.4 points. The squad maintains its actual position as floats, which are then rounded to draw the position on the map. The path is described by touching the squad and dragging to the end position, then lifting the pen or finger. (I'm doing this on an iPhone now, but Android/Qt/Windows Mobile would work the same.) As the pointer moves, x, y points are recorded so that the squad gains a list of intermediate destinations on the way to the final destination. I'm finding that the destinations are not evenly spaced but can be further apart depending on the speed of the pointer movement. Following the path is important because obstacles and terrain matter in this game. I'm not trying to remake Flight Control, but that's a similar mechanic. Here's what I've been doing, but it just seems too complicated (pseudocode):

        getDestination() {
            self.nextDestination = remove_from_array(destinations)
            self.gradient = delta y to destination / delta x to destination
            self.angle = atan(self.gradient)
            self.cosAngle = cos(self.angle)
            self.sinAngle = sin(self.angle)
        }

        move() {
            get movement allocation for this turn
            if self.nextDestination not valid
                getNextDestination()
            while (nextDestination valid) && (movement allocation remains) {
                find xStep and yStep using movement allocation and sinAngle/cosAngle calculated for current self.nextDestination
                if current position + xStep crosses the destination
                    find x movement remaining after self.nextDestination reached
                    calculate remaining direct path movement allocation (xStep remaining / cosAngle)
                    make self.position equal to self.nextDestination
                else
                    apply xStep and yStep to current position
            }
            round squad's float coordinates to integer screen coordinates
            draw squad image on map
        }

    That's simplified of course; stuff like sign needs to be tweaked to ensure movement is in the right direction. If trig is the best way to do it, then lookup tables can be used, or maybe it doesn't matter on modern devices like it used to. Suggestions for a better way to do it?

    An update: the iPhone has zero issues with trig, with tracking tens of positions, and with tracks implemented as described above, and it draws in floats anyway. The Bresenham method is more efficient; trig is more precise. If I were to use integer Bresenham I would want to multiply by ten or so to maintain a little more positional accuracy to benefit collision/terrain detection.
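    A minimal sketch of the stepping idea above, written in Python purely for illustration (the function and variable names are invented, not the poster's code). It avoids per-step trig entirely by normalizing the direction vector with the Euclidean distance to the next waypoint:

        import math

        def advance(pos, waypoints, budget):
            # pos is a mutable [x, y] pair of floats; waypoints is a list of (x, y)
            # tuples consumed from the front; budget is this turn's movement allocation.
            while waypoints and budget > 0:
                tx, ty = waypoints[0]
                dx, dy = tx - pos[0], ty - pos[1]
                dist = math.hypot(dx, dy)        # Euclidean cost to the next waypoint
                if dist <= budget:               # the waypoint is reachable this turn
                    pos[0], pos[1] = tx, ty
                    budget -= dist
                    waypoints.pop(0)             # move on to the next waypoint
                else:                            # partial step along the unit direction vector
                    pos[0] += dx / dist * budget
                    pos[1] += dy / dist * budget
                    budget = 0.0
            return budget                        # unspent movement, if any

    Called once per turn, e.g. advance(squad_pos, squad_path, 5.0), it consumes waypoints exactly until the allocation runs out; drawing still rounds the float position to screen coordinates as in the pseudocode above.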

  • UINavigation Bar while moving view for writing in a textfield

    - by ObiWanKeNerd
    I'm using this code to move the view when I'm about to type in a text field; otherwise the keyboard may cover the text field if it's in the lower part of the screen. I would like to know if there is a way to keep the UINavigationBar in its place, because with this code the bar moves off-screen together with the rest of the view, becoming untouchable until I end editing the text field (closing the keyboard).

        CGFloat animatedDistance;
        static const CGFloat KEYBOARD_ANIMATION_DURATION = 0.3;
        static const CGFloat MINIMUM_SCROLL_FRACTION = 0.2;
        static const CGFloat MAXIMUM_SCROLL_FRACTION = 0.8;
        static const CGFloat PORTRAIT_KEYBOARD_HEIGHT = 216;
        static const CGFloat LANDSCAPE_KEYBOARD_HEIGHT = 162;

        - (void)textFieldDidBeginEditing:(UITextField *)textField
        {
            CGRect textFieldRect = [self.view.window convertRect:textField.bounds fromView:textField];
            CGRect viewRect = [self.view.window convertRect:self.view.bounds fromView:self.view];
            CGFloat midline = textFieldRect.origin.y + 0.5 * textFieldRect.size.height;
            CGFloat numerator = midline - viewRect.origin.y - MINIMUM_SCROLL_FRACTION * viewRect.size.height;
            CGFloat denominator = (MAXIMUM_SCROLL_FRACTION - MINIMUM_SCROLL_FRACTION) * viewRect.size.height;
            CGFloat heightFraction = numerator / denominator;

            if (heightFraction < 0.0) {
                heightFraction = 0.0;
            } else if (heightFraction > 1.0) {
                heightFraction = 1.0;
            }

            UIInterfaceOrientation orientation = [[UIApplication sharedApplication] statusBarOrientation];
            if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown) {
                animatedDistance = floor(PORTRAIT_KEYBOARD_HEIGHT * heightFraction);
            } else {
                animatedDistance = floor(LANDSCAPE_KEYBOARD_HEIGHT * heightFraction);
            }

            CGRect viewFrame = self.view.frame;
            viewFrame.origin.y -= animatedDistance;

            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationBeginsFromCurrentState:YES];
            [UIView setAnimationDuration:KEYBOARD_ANIMATION_DURATION];
            [self.view setFrame:viewFrame];
            [UIView commitAnimations];
        }

        - (void)textFieldDidEndEditing:(UITextField *)textField
        {
            CGRect viewFrame = self.view.frame;
            viewFrame.origin.y += animatedDistance;

            [UIView beginAnimations:nil context:NULL];
            [UIView setAnimationBeginsFromCurrentState:YES];
            [UIView setAnimationDuration:KEYBOARD_ANIMATION_DURATION];
            [self.view setFrame:viewFrame];
            [UIView commitAnimations];
        }

    Thanks in advance!

  • Thin & Sinatra not taking port

    - by NekoNova
    I'm having problems setting up my application using Thin and Sinatra. I have created a development-config.ru file that contains the following settings:

        # This is a rack configuration file to fire up the Sinatra application.
        # This allows better control and configuration as we are using the modular
        # approach here for controlling our application.
        #
        # Extend the Ruby load path with the root of the API and the lib folder
        # so that we can automatically include all our own custom classes. This makes
        # the requiring of files a bit cleaner and easier to maintain.
        # This is basically what rails does as well.
        # We also store the root of the API in the ENV settings to ensure we have
        # always access to the root of the API when building paths.
        ENV['API_ROOT'] = File.dirname(__FILE__)
        $:.unshift ENV['API_ROOT']
        $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'lib'))
        $:.unshift File.expand_path(File.join(ENV['API_ROOT'], 'db'))

        # Now we can require all the gems used for the entire API by simply requiring
        # them here. We can also include the classes that we have defined inside the lib
        # folder.
        require 'rubygems'
        require 'bundler'

        # Run Bundler to setup our gems properly. This will install all the missing gems on
        # the system and ensure that the deployment environment is ready to run.
        Bundler.require

        # To make the loading easier for the application, we will now automatically load all
        # models that have been defined inside the lib folder. This ensures that we do not need
        # to load them anymore anywhere else in our application, as the models will be known to
        # ruby everywhere.
        Dir.glob(File.join(ENV['API_ROOT'], 'lib', '**', '*.rb')).each { |file| require file }

        # Now we will configure the Sinatra application so that we can fire up the entire API.
        # This requires some detailed settings like whether logging is allowed, the port to be
        # used and some folder locations.
        require 'sinatra'
        require 'app'

        set :logging, true
        set :dump_errors, true
        set :port, 3001
        set :views, "#{ENV['API_ROOT']}/views"
        set :public_folder, "#{ENV['API_ROOT']}/public"
        set :environment, :test

        # Start up the Sinatra application with all the settings that we have defined.
        run App.new

    This is based upon the information I found on the Sinatra website. However, the problem is that I cannot get the application running on port 3001. If I use thin start -R development-config.ru, it runs on port 3000. If I use rackup config-development.ru, it runs on port 9696. However, I never see Sinatra kick in or run over port 3000. My application looks like this:

        # Author : Arne De Herdt
        # Email  :
        # This is the actual application that will be running under Sinatra
        # to serve the requests for the billing middleware API.
        # We use the modular approach here to allow control when deploying
        # the application using Capistrano.
        require 'sinatra/base'
        require 'logger'
        require 'savon'
        require 'billcrux'

        class App < Sinatra::Base
          # This action responds to POST requests on the URI '/billcrux/register'
          # and is responsible for handling registration requests with the
          # BillCrux payment system.
          post "/billcrux/register" do
            # do stuff
          end
        end

    Can someone tell me what I am doing wrong?

  • Best of both worlds: browser and desktop game?

    - by Ricket
    When considering a platform for a game, I've decided on multi-platform (Win/Lin/Mac) but can't make up my mind as far as browser vs. desktop. As I'm not all too far in development, and now having second thoughts, I'd like your opinion!

    Browser-based games using Java applets:

    - market penetration is reasonably high (for version 6, it's somewhere around 60% I believe?)
    - using JOGL, 3D performance/quality is decent; certainly good enough to render the crappy 3D graphics that I make
    - there's the (small?) possibility of porting something to Android
    - great for an audience of gamers who switch computers often; they can sit down at any computer, load a webpage and play it
    - also great for casual gamers or less knowledgeable gamers who are quite happy with playing games in a browser but don't want to install more things to their computer
    - written in a high-level language which I am more familiar with than C++; but at the same time, I would like to improve my skills with C++ as it is probably where I am headed in the game industry once I get out of school...
    - easier update process: reload the page.

    Desktop games using good ol' C++ and OpenGL:

    - 100% market penetration, assuming complete cross-platform; however, that number reduces when you consider how many people will go through downloading and installing an executable compared to just browsing to a webpage and hitting "yes" to a security warning
    - more trouble to maintain the cross-platform; but again, for learning purposes I would embrace the challenge and the knowledge I would gain
    - better performance all around
    - true full screen, whereas browser games often struggle with smooth full screen graphics (especially on Linux, in my experience)
    - can take advantage of distribution platforms such as Steam
    - more likely to be considered a "real" game, whereas browser and Java games are often dismissed as not being real games and therefore not played by "hardcore gamers"
    - installer can be large; don't have to worry so much about download times

    Is there a way to have the best of both worlds? I love Java applets, but I also really like the reasons to write a desktop game. I don't want to constantly port everything between a Java applet project and a C++ project; that would be twice the work! Unity chose to write their own web player plugin. I don't like this, because I am one of the people that will not install their web player for anything, and I don't see myself being able to convince my audience to install a browser plugin. What are my options? Are there other examples out there besides Unity, of games that have browser and desktop versions? Did I leave out anything in the pro/con lists above?

  • Renaming nodes and values with xslt

    - by T.K.
    Hello world, I'm new to XSLT and have a task that I'm not really sure where to go with. I want to rename nodes but maintain the format of all node declarations. In the actual context I'll be applying this to, I'll be doing a series of renames like this, but for the sake of brevity the sample I've written up only involves renaming one node. I am using XSL 1.0.

    Input:

        <variables>
            <var>
                <RENAME> a </RENAME>
            </var>
            <var RENAME='b'/>
            <var>
                <DO_NOT_TOUCH> c </DO_NOT_TOUCH>
            </var>
            <var DO_NOT_TOUCH='d'/>
        </variables>

    Desired output:

        <variables>
            <var>
                <DONE> a </DONE>
            </var>
            <var DONE='b'/>
            <var>
                <DO_NOT_TOUCH> c </DO_NOT_TOUCH>
            </var>
            <var DO_NOT_TOUCH='d'/>
        </variables>

    My XSLT:

        <xsl:template match="RENAME">
            <RENAMED>
                <xsl:apply-templates select="@*|node()"/>
            </RENAMED>
        </xsl:template>

        <xsl:template match="@*|node()">
            <xsl:copy>
                <xsl:apply-templates select="@*|node()"/>
            </xsl:copy>
        </xsl:template>

    Current output:

        <variables>
            <var>
                <RENAMED> a </RENAMED>
            </var>
            <var RENAME="b"> </var>
            <var>
                <DO_NOT_TOUCH> c </DO_NOT_TOUCH>
            </var>
            <var DO_NOT_TOUCH="d"> </var>
        </variables>

  • Calling cdecl Functions That Have Different Number of Arguments

    - by KlaxSmashing
    I have functions that I wish to call based on some input. Each function has a different number of arguments. In other words:

        if (strcmp(str, "funcA") == 0)
            funcA(a, b, c);
        else if (strcmp(str, "funcB") == 0)
            funcB(d);
        else if (strcmp(str, "funcC") == 0)
            funcC(f, g);

    This is a bit bulky and hard to maintain. Ideally, these would be variadic functions (e.g., printf-style) and could use varargs. But they are not. So, exploiting the cdecl calling convention, I am stuffing the stack via a struct full of parameters. I'm wondering if there's a better way to do it. Note that this is strictly for in-house use (e.g., simple tools, unit tests, etc.) and will not be used for any production code that might be subjected to malicious attacks. Example:

        #include <stdio.h>

        typedef struct __params {
            unsigned char* a;
            unsigned char* b;
            unsigned char* c;
        } params;

        int funcA(int a, int b)
        {
            printf("a = %d, b = %d\n", a, b);
            return a;
        }

        int funcB(int a, int b, const char* c)
        {
            printf("a = %d, b = %d, c = %s\n", a, b, c);
            return b;
        }

        int funcC(int* a)
        {
            printf("a = %d\n", *a);
            *a *= 2;
            return 0;
        }

        typedef int (*f)(params);

        int main(int argc, char** argv)
        {
            int val;
            int tmp;
            params myParams;
            f myFuncA = (f)funcA;
            f myFuncB = (f)funcB;
            f myFuncC = (f)funcC;

            myParams.a = (unsigned char*)100;
            myParams.b = (unsigned char*)200;
            val = myFuncA(myParams);
            printf("val = %d\n", val);

            myParams.c = (unsigned char*)"This is a test";
            val = myFuncB(myParams);
            printf("val = %d\n", val);

            tmp = 300;
            myParams.a = (unsigned char*)&tmp;
            val = myFuncC(myParams);
            printf("a = %d, val = %d\n", tmp, val);

            return 0;
        }

    Output:

        gcc -o func func.c
        ./func
        a = 100, b = 200
        val = 100
        a = 100, b = 200, c = This is a test
        val = 200
        a = 300
        a = 600, val = 0

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in that case a different SQL statement is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually. The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL.

    The problems we are facing now that make us think of an ORM are:

    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that there is now one that offers acceptable performance. As I see this issue, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings
    - Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e. avoid manual rs.getInt(42) calls, as they are very sensitive to schema changes)
    - Add a non-intrusive caching layer
    - Keep the performance figures

    Is there any ORM framework you could recommend with regards to that?

  • Correct usage of socket_select().

    - by Mark Tomlin
    What is the correct way to use socket_select within PHP to send and receive data? I have a connection to the server that allows for both TCP and UDP packet connections, and I am utilizing both. Within these connections I'm both sending and receiving packets on the same port, but the TCP packets will be sent on one port (29999) and UDP will be sent on another port (30000). The address family will be AF_INET. The IP address will be the loopback address 127.0.0.1. I have many questions on how to create a socket connection within this scenario. For example, is it better to use socket_create_pair to make the connection, or just socket_create followed by socket_connect, and then implement socket_select? There is a chance that no data will be sent from the server to the client, and it is up to the client to maintain the connection. This will be done by utilizing the timeout parameter of the socket_select call. Should no data be sent within the time limit, the socket_select call will return and a keep-alive packet can then be sent. The following script is the client.

        // Create
        $TCP = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
        $UDP = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);

        // Misc
        $isAlive = TRUE;
        $UDPPort = 30000;
        define('ISP_ISI', 1);

        // Connect
        socket_connect($TCP, '127.0.0.1', 29999);
        socket_connect($UDP, '127.0.0.1', $UDPPort);

        // Construct Parameters
        $recv = array($TCP, $UDP);
        $null = NULL;

        // Make The Packet to Send.
        $packet = pack('CCCxSSxCSa16a16', 44, ISP_ISI, 1, $UDPPort, 0, '!', 0, 'AdminPass', 'SocketSelect');

        // Send ISI (InSim Init) Packet
        socket_write($TCP, $packet);

        /* Main Program Loop */
        while ($isAlive == TRUE)
        {
            // Socket Select
            $sock = socket_select($recv, $null, $null, 5);

            // Check Status
            if ($sock === FALSE)
                $isAlive = FALSE; # Error
            else if ($sock > 0)
                # How does one check to find what socket changed?
            else
                # Something else happened; don't know what, as it's not in the documentation. Could this be our timeout getting tripped?
        }
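    For what it's worth, PHP's socket_select (like the underlying C select) modifies the $recv array in place, so after the call it contains only the sockets that actually have data waiting. The same pattern in Python's standard select module, shown purely for comparison (the socket names are assumptions and the sockets are presumed to be already connected):

        import select

        def poll_once(tcp_sock, udp_sock, timeout=5.0):
            # tcp_sock and udp_sock are assumed to be already-connected socket
            # objects, analogous to $TCP and $UDP in the PHP script above.
            readable, _, _ = select.select([tcp_sock, udp_sock], [], [], timeout)
            if not readable:
                return None                    # timeout expired: caller sends a keep-alive
            for sock in readable:              # only sockets that actually changed appear here
                if sock is tcp_sock:
                    return ('tcp', sock.recv(4096))
                if sock is udp_sock:
                    return ('udp', sock.recvfrom(4096))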

  • Trouble passing a string as a SQLite ExecSQL command

    - by Hackbrew
    I keep getting the error near "PassWord": syntax error when trying to execute the ExecSQL() statement. The command looks good in the output of the text file. In fact, I copied and pasted the command directly into SQLite Database Browser and the command executed properly. Here's the code that's producing the error:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          i, iFieldSize: integer;
          sFieldName, sFieldType, sFieldList, sExecSQL: String;
          names: TStringList;
          f1: Textfile;
        begin
          // Open Source table - Table1 has 8 fields but has only two different field types, ftString and Boolean
          Table1.TableName := 'PWFile';
          Table1.Open;
          // FDConnection1.ExecSQL('drop table PWFile');

          sFieldList := '';
          names := TStringList.Create;

          for i := 0 to Table1.FieldCount - 1 do
          begin
            sFieldName := Table1.FieldDefList.FieldDefs[i].Name;
            sFieldType := GetEnumName(TypeInfo(TFieldType), ord(Table1.FieldDefList.FieldDefs[i].DataType));
            iFieldSize := Table1.FieldDefList.FieldDefs[i].Size;

            if sFieldType = 'ftString' then
              sFieldType := 'NVARCHAR' + '(' + IntToStr(iFieldSize) + ')';
            if sFieldType = 'ftBoolean' then
              sFieldType := 'INTEGER';

            names.Add(sFieldName + ' ' + sFieldType);

            if sFieldList = '' then
              sFieldList := sFieldName + ' ' + sFieldType
            else
              sFieldList := sFieldList + ', ' + sFieldName + ' ' + sFieldType;
          end;

          ListBox1.Items.Add(sFieldList);

          sExecSQL := 'create table IF NOT EXISTS PWFile (' + sFieldList + ')';

          // 08/18/2014 - Entered this to log the SQLite FDConnection1.ExecSQL Command to a file
          AssignFile(f1, 'C:\Users\Test User\Documents\SQLite_Command.txt');
          Rewrite(f1);
          Writeln(f1, sExecSQL);
          { insert code here that would require a Flush before closing the file }
          Flush(f1); { ensures that the text was actually written to file }
          CloseFile(f1);

          FDConnection1.ExecSQL(sFieldList);

          Table1.Close;
        end;

    Here's the actual command that gets executed:

        create table IF NOT EXISTS PWFile (PassWord NVARCHAR(10), PassName NVARCHAR(10), Dept NVARCHAR(10), Active NVARCHAR(1), Admin INTEGER, Shred INTEGER, Reports INTEGER, Maintain INTEGER)
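    For comparison only: the code above is Delphi/FireDAC, but the same build-the-statement, log-it, then execute-it flow looks like this with Python's built-in sqlite3 module (the field list is taken from the statement above; file names and everything else are invented):

        import sqlite3

        fields = [('PassWord', 'NVARCHAR(10)'), ('PassName', 'NVARCHAR(10)'),
                  ('Dept', 'NVARCHAR(10)'), ('Active', 'NVARCHAR(1)'),
                  ('Admin', 'INTEGER'), ('Shred', 'INTEGER'),
                  ('Reports', 'INTEGER'), ('Maintain', 'INTEGER')]

        field_list = ', '.join(f'{name} {ftype}' for name, ftype in fields)
        create_sql = f'create table IF NOT EXISTS PWFile ({field_list})'

        with open('SQLite_Command.txt', 'w') as log:   # log the statement for inspection
            log.write(create_sql + '\n')

        conn = sqlite3.connect('pwfile.db')
        conn.execute(create_sql)                       # execute the assembled statement
        conn.commit()
        conn.close()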

  • Thoughts on streamlining multiple .Net apps

    - by John Virgolino
    We have a series of ASP.NET applications that have been written over the course of 8 years, mostly in the first 3-4 years. They have been running quite well with little maintenance, but new functionality is being requested and we are running into IDE and platform issues. The apps were written in .NET 1.x and 2.x and run in separate spaces, but they are presented as a single suite of applications which use a common navigation toolbar (implemented as a user control). Every time we want to add something to a menu in the nav we have to modify it in all the apps, which is a pain. Also, with the various versions of Crystal Reports, and the fact that we used tables to organize the visual elements, we end up with a mess, especially with all the multiple .NET versions running. We need to streamline the suite of apps and make it easier to add new apps without a hassle. We also need to bring all these apps under one .NET platform and IDE. In addition, there is a WordPress blog styled to match the application suite "integrated" into the UI, and a link to a MediaWiki wiki application as well.

    My current thinking is to use an open source content management system (CMS) like Joomla (PHP based, unfortunately, but it works well) as the user interface framework for style templating and menu management. Joomla's article management would allow us to migrate the wiki content into articles, which could be published without interfering with the .NET apps. Then, essentially, use an IFrame within an "article" to "host" each .NET application, and then upgrade the .NET apps to VS2010, strip out all the common header/footer controls, and migrate the styles to use the style sheets used in the CMS.

    As I write this, I certainly realize this is a lot of work, and there are optimization issues it may cause; using IFrames seems a bit like cheating, and I've read about issues with IFrames. I know that we could use .NET application styling, but it seems like a lot more work (not sure really). Also, the use of a CMS to handle the blog and wiki seems appealing, unless there is a .NET CMS out there that can handle all of these requirements.

    Given this information, am I totally going in the wrong direction? We tried to use open source and integrate it over time, but now this has become hard to maintain. Am I not aware of some technology out there that will meet our requirements? Did we do this right, and should we just focus on streamlining the .NET apps? I understand that no matter what we do, it's going to be a lot of work. The community's considerable experience would be helpful. Thanks!!

    PS - A complete rewrite is not an option.

  • Representing xml through a single class

    - by Charles
    I am trying to abstract away the difficulties of configuring an application that we use. This application takes an XML configuration file, and it can be a bit bothersome to edit this file manually, especially when we are trying to set up some automatic testing scenarios. I am finding that reading XML is nice and pretty easy: you get a network of element nodes that you can just walk through and build your structures quite nicely. However, I am slowly finding that the reverse is not quite so nice. I want to be able to build an XML configuration file through a single easy-to-use interface, and because XML is composed of a system of nodes I am having a lot of trouble trying to maintain the 'easy' part. Does anyone know of any examples or samples that easily and intuitively build XML files without declaring a bunch of element-type classes and expecting the user to build the network themselves? For example, if my desired XML output is like so:

        <cook version="1.1">
            <recipe name="chocolate chip cookie">
                <ingredients>
                    <ingredient name="flour" amount="2" units="cups"/>
                    <ingredient name="eggs" amount="2" units="" />
                    <ingredient name="cooking chocolate" amount="5" units="cups" />
                </ingredients>
                <directions>
                    <direction name="step 1">Preheat oven</direction>
                    <direction name="step 2">Mix flour, egg, and chocolate</direction>
                    <direction name="step 2">bake</direction>
                </directions>
            </recipe>
            <recipe name="hot dog">
            ...

    How would I go about designing a class to build that network of elements and make one easy-to-use interface for creating recipes? Right now I have a recipe object, an ingredient object, and a direction object. The user must make each one, set the attributes on the class, and attach them to the root object, which assembles the XML elements and outputs the formatted XML. It's not very pretty, and I just know there has to be a better way. I am using Python, so bonus points for pythonic solutions.
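    One possible shape for such an interface, sketched with the standard library's xml.etree.ElementTree. The Cookbook/Recipe classes and the chained-call style are invented for illustration, not an existing API:

        import xml.etree.ElementTree as ET

        class Cookbook:
            def __init__(self, version="1.1"):
                self.root = ET.Element("cook", version=version)

            def recipe(self, name):
                return Recipe(ET.SubElement(self.root, "recipe", name=name))

            def tostring(self):
                return ET.tostring(self.root, encoding="unicode")

        class Recipe:
            def __init__(self, element):
                self._ingredients = ET.SubElement(element, "ingredients")
                self._directions = ET.SubElement(element, "directions")

            def ingredient(self, name, amount, units=""):
                ET.SubElement(self._ingredients, "ingredient",
                              name=name, amount=str(amount), units=units)
                return self            # allow chained calls

            def direction(self, name, text):
                step = ET.SubElement(self._directions, "direction", name=name)
                step.text = text
                return self

        book = Cookbook()
        (book.recipe("chocolate chip cookie")
             .ingredient("flour", 2, "cups")
             .ingredient("eggs", 2)
             .direction("step 1", "Preheat oven"))
        print(book.tostring())

    Printing book.tostring() yields the nested cook/recipe/ingredients/directions structure shown above without the caller ever handling element objects directly.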

  • How do you select form elements in JQuery based upon an html table?

    - by Swoop
    I am working on some ASP.NET web forms which involve some dynamic generation, and I need to add some onClick helpers on the client side. I have a basic outline of something working, except for one huge problem. There are multiple HTML tables, each generated by a different ASP.NET web control. Each table can contain overlapping field names, which is causing a problem with my jQuery click event handlers: the click event handler is linking to unintended form fields in addition to the intended form field.

    I have provided a simplified sample version of the code below. This code is trying to set the value of the textbox box1 when a particular radio button is selected in the table with id="thing1". Obviously, the jQuery code will be triggered for the form fields in both tables. The tables are dynamically added to the web page based upon different conditions. It is possible that no tables will be loaded, only one table, or both tables might load. In the future, other tables could be added. Each table comes from a different .NET web control.

    Other than renaming the form fields to make sure they are unique across all user controls, is there a way to have jQuery act only on the intended form fields? In other words, could the table ID be incorporated into the jQuery code in a manner that does not become a nightmare to maintain later?

        <script>
        $(document).ready(function() {
            $("[id$=radio1_0]").click(function() {
                $("[id$=box1]").attr("value", "");
            });
            $("[id$=radio1_1]").click(function() {
                $("[id$=box1]").attr("value", "N/A");
            });
        </script>

        <table id="thing1">
            <tr><td>
                <radiobuttonlist id="radio1"/>
                <listitem>yes</listitem>
                <listitem>no</listitem>
            </td></tr>
            <tr><td>
                <textbox id="box1"/>
            </td></tr>
        </table>

        <table id="thing2">
            <tr><td>
                <radiobuttonlist id="radio1"/>
                <listitem>yes</listitem>
                <listitem>no</listitem>
            </td></tr>
            <tr><td>
                <textbox id="box1"/>
            </tr></td>
        </table>

  • Blit Queue Optimization Algorithm

    - by martona
    I'm looking to implement a module that manages a blit queue. There's a single surface, and portions of this surface (bounded by rectangles) are copied to elsewhere within the surface:

        add_blt(rect src, point dst);

    There can be any number of operations posted, in order, to the queue. Eventually the user of the queue will stop posting blits and ask for an optimal set of operations to actually perform on the surface. The task of the module is to ensure that no pixel is copied unnecessarily. This gets tricky because of overlaps, of course. A blit could re-blit a previously copied pixel. Ideally, blit operations would be subdivided in the optimization phase in such a way that every block goes to its final place with a single operation.

    It's tricky but not impossible to put this together. I'm just trying not to reinvent the wheel. I looked around on the net, and the only thing I found was the SDL_BlitPool library, which assumes that the source surface differs from the destination. It also does a lot of grunt work, seemingly unnecessarily: regions and similar building blocks are a given. I'm looking for something higher-level. Of course, I'm not going to look a gift horse in the mouth, and I also don't mind doing actual work... If someone can come forward with a basic idea that makes this problem seem less complex than it does right now, that'd be awesome too.

    EDIT: Thinking about aaronasterling's answer... could this work?

    - Implement customized region handler code that can maintain metadata for every rectangle it contains. When the region handler splits up a rectangle, it will automatically associate the metadata of this rectangle with the resulting sub-rectangles.
    - When the optimization run starts, create an empty region handled by the above customized code; call this the master region.
    - Iterate through the blt queue, and for every entry:
      - Let srcrect be the source rectangle for the blt being examined.
      - Get the intersection of srcrect and the master region into a temp region.
      - Remove the temp region from the master region, so the master region no longer covers the temp region.
      - Promote srcrect to a region (srcrgn) and subtract the temp region from it.
      - Offset the temp region and srcrgn by the vector of the current blt: their union will cover the destination area of the current blt.
      - Add to the master region all rects in the temp region, retaining the original source metadata (step one of adding the current blt to the master region).
      - Add to the master region all rects in srcrgn, adding the source information for the current blt (step two of adding the current blt to the master region).
    - Optimize the master region by checking whether adjacent sub-rectangles that are merge candidates have the same metadata. Two sub-rectangles are merge candidates if (r1.x1 == r2.x1 && r1.x2 == r2.x2) || (r1.y1 == r2.y1 && r1.y2 == r2.y2). If yes, combine them.
    - Enumerate the master region's sub-rectangles. Every rectangle returned is an optimized blt operation destination. The associated metadata is the blt operation's source.
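    The core primitive such a metadata-aware region handler needs is rectangle subtraction: splitting a rectangle into the pieces not covered by another. A small Python sketch of just that operation, with an assumed (x1, y1, x2, y2) representation:

        def subtract(r, cut):
            # r and cut are (x1, y1, x2, y2) rectangles with x1 < x2 and y1 < y2.
            # Returns the parts of r not covered by cut, as up to four rectangles.
            ix1, iy1 = max(r[0], cut[0]), max(r[1], cut[1])
            ix2, iy2 = min(r[2], cut[2]), min(r[3], cut[3])
            if ix1 >= ix2 or iy1 >= iy2:
                return [r]                                            # no overlap: r survives intact
            pieces = []
            if r[1] < iy1: pieces.append((r[0], r[1], r[2], iy1))     # band above the cut
            if iy2 < r[3]: pieces.append((r[0], iy2, r[2], r[3]))     # band below the cut
            if r[0] < ix1: pieces.append((r[0], iy1, ix1, iy2))       # strip left of the cut
            if ix2 < r[2]: pieces.append((ix2, iy1, r[2], iy2))       # strip right of the cut
            return pieces

    Intersection and offsetting are built the same way, and each returned piece would carry its source-blt metadata alongside the tuple.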

  • Is there a programming language with semantics close to English?

    - by ivo s
    Most languages allow you to 'tweak', to a certain extent, parts of the syntax (C++, C#) and/or the semantics that you will be using in your code (Katahdin, Lua). But I have not heard of a language that can just completely define how your code will look. So isn't there some language which already exists that has such capabilities to override all syntax and define semantics? An example of what I want to do, starting from the C# code below:

        foreach (Fruit fruit in Fruits)
        {
            if (fruit is Apple)
            {
                fruit.Price = fruit.Price / 2;
            }
        }

    I want to be able to write the above code in my perfect language like this:

        Check if any fruits are Macintosh apples and discount the price by 50%.

    The advantages that come to mind, looking from a coder's perspective at this "imaginary" language, are:

    - It's very clear what is going on (self-descriptive); it's plain English after all, so even a kid would understand my program.
    - It hides all the complexities which I have to write in C#. Why should I care to learn how if statements, arithmetic operators, etc. are implemented, since they are already implemented?

    The disadvantages that I see for a coder who will maintain this program are:

    - Maybe you would express this program differently from me, so you may not get all the information that I've expressed in my sentence.
    - Programs can be quite verbose and hard to debug.

    But if it were possible to even approximate this type of syntax, maybe more people would start programming, right? That would be amazing, I think. I could go to work and just write an essay to draw a square on a WinForm like this:

        Create a form called MyGreetingForm.
        Draw a square in the middle of MyGreetingForm with a side of 100 points.
        In the middle of the square write "Hello! Click here to continue" in Arial font.

    In the above code the parser must basically guess that I want to use the unnamed square from the previous sentence. It would be hard to write such a smart parser, I guess, yet what I want to do is so simple.

        If the user clicks on the square in the middle of MyGreetingForm show MyMainForm.

    For the above code the compiler must basically: 1) generate an event handler, 2) check if there is any square in the middle of the form, and if there is, 3) hide the form and show another form. It looks very hard to do, but it doesn't look impossible, IMO, to at least approximate this (I can personally generate a parser to perform the 3 steps above, no problem, and it's basically the same thing it has to do anyway when you write, even in C#, a.MyEvent += handler;, so I don't see a problem here). So I'm thinking maybe somebody already did something like this? Or is there some practical burden of complexity in creating such an 'essay style' programming language which I can't see? I mean, what's the worst that can happen if the parser is not that good? Your program will crash, so you have to re-word it :)

  • OpenGL ES Polygon with Normals rendering (Note the 'ES!')

    - by MarqueIV
    OK... imagine I have a relatively simple solid that has six distinct normals but actually has close to 48 faces (8 faces per direction), and there are a LOT of shared vertices between faces. What's the most efficient way to render that in OpenGL? I know I can place the vertices in an array and then use an index array to render them, but I have to keep breaking my rendering steps down to change the normals (i.e. set normal 1... render 8 faces... set normal 2... render 8 faces, etc.). Because of that I have to maintain an array of index arrays, one for each normal! Not good!

    The other way I can do it is to use separate normal and vertex arrays (or even interleave them), but that means I need a one-to-one ratio of normals to vertices, and that means the normals would be duplicated 8 times more than they need to be! On something with a spherical or even curved surface every normal most likely is different, but for this it really seems like a waste of memory.

    In a perfect world I'd like my vertex and normal arrays to have different lengths, and then when I go to draw my triangles or quads, to specify for each vertex an index into each array. Now, the OBJ file format lets you specify exactly that: a vertex array and a normal array of different lengths, and when you specify the face you are rendering, you specify a vertex and a normal index (as well as a UV coordinate if you are using textures too), which seems like the perfect solution! 48 vertices but only 8 normals, then pairs of indexes defining the shapes' faces. But I'm not sure how to render that in OpenGL ES (again, note the 'ES'). Currently I have to 'denormalize' (sorry for the SQL pun there) the normals back to a one-to-one with the vertex array, then render. That just wastes memory to me. Can anyone help? I hope I'm missing something very simple here. Mark
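    Since OpenGL ES indexes every attribute array with the same single index, OBJ-style separate position/normal indices have to be flattened into unique (position, normal) pairs before upload. A small Python sketch of that preprocessing step, with an assumed data layout (names and structure are illustrative, not an established API):

        def flatten_obj_faces(positions, normals, faces):
            # positions: list of (x, y, z); normals: list of (nx, ny, nz)
            # faces: list of triangles, each a list of (position_index, normal_index) pairs
            # Returns interleaved vertex data plus a single index list for glDrawElements.
            combined = {}        # (pos_idx, norm_idx) -> new combined index
            vertex_data = []     # interleaved: x, y, z, nx, ny, nz per unique pair
            indices = []
            for face in faces:
                for pair in face:
                    if pair not in combined:
                        combined[pair] = len(combined)
                        vertex_data.extend(positions[pair[0]])
                        vertex_data.extend(normals[pair[1]])
                    indices.append(combined[pair])
            return vertex_data, indices

    Vertices shared by faces that use the same normal collapse into one interleaved entry; a vertex used with several different normals still appears once per normal, which is unavoidable with a single index buffer.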

  • C# some sort of plugin system

    - by nLL
    Hi, I am a mobile web developer trying to monetize my traffic with mobile ad services, and I have a problem. First of all, to get the most out of your ads you usually need to make server-side requests to the advert company's servers, and there are quite a few ad services. The problem starts when you want to use several of them on one site. They all have different approaches to server-side calls, and trying to maintain and implement those ad codes becomes a pain after a while. So I decided to write a class system where I can simply create methods for every company and upload them to my site. So far I have:

    - a public Advert class
    - a public AdPublisher class with a GetAd method that returns an Advert
    - a public AdService class that has the service names as an enum

    I have also converted the server request code of all the ad services I use into classes. It works OK, but I want to be able to create an ad service class and upload it so that the ASP.NET app can import/recognize it automatically, like a plugin system. As I am new to .NET, I have no idea where to start or how to do it. To make things clear, here are my classes:

        namespace Mobile.Publisher
        {
            public class AdPublisher
            {
                public AdPublisher()
                {
                    IsTest = false;
                }

                public bool IsTest { get; set; }
                public HttpRequest CurrentVisitorRequestInfo { get; set; }

                public Advert GetAd(AdService service)
                {
                    Advert returnAd = new Advert();
                    returnAd.Success = true;

                    if (this.CurrentVisitorRequestInfo == null)
                    {
                        throw new Exception("CurrentVisitorRequestInfo for AdPublisher not set!");
                    }

                    if (service == null)
                    {
                        throw new Exception("AdService not set!");
                    }

                    if (service.ServiceName == AdServices.Admob)
                    {
                        returnAd.ReturnedAd = AdmobAds("000000");
                    }

                    return returnAd;
                }
            }

            public enum AdServices
            {
                Admob,
                ServiceB,
                ServiceC
            }

            public class Advert
            {
                public bool Success { get; set; }
                public string ReturnedAd { get; set; }
            }

            public partial class AdService
            {
                public AdServices ServiceName { get; set; }
                public string PublisherOrSiteId { get; set; }
                public string ZoneOrChannelId { get; set; }
            }

            private string AdmobAds(string publisherid)
            {
                // snip
                return "test";
            }
        }

    Basically I want to be able to add another ad service with code like

        private string AdmobAds(string publisherid) { }

    so that it can be imported and recognised as an ad service. I hope I was clear enough.
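    Purely as an illustration of the discovery pattern being asked about (the project above is C#/ASP.NET, where reflection over the types in an assembly is the usual route), here is a minimal Python sketch that scans a folder for classes deriving from a known base class; every name in it is invented:

        import importlib.util
        import inspect
        import pathlib

        class AdServicePlugin:
            """Base class every plugin module is expected to subclass."""
            name = "base"
            def get_ad(self, publisher_id):
                raise NotImplementedError

        def load_plugins(folder):
            # Import every .py file in `folder` and collect AdServicePlugin subclasses.
            plugins = {}
            for path in pathlib.Path(folder).glob("*.py"):
                spec = importlib.util.spec_from_file_location(path.stem, path)
                module = importlib.util.module_from_spec(spec)
                spec.loader.exec_module(module)
                for _, cls in inspect.getmembers(module, inspect.isclass):
                    if issubclass(cls, AdServicePlugin) and cls is not AdServicePlugin:
                        plugins[cls.name] = cls()
            return plugins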

  • Can I have a workspace that is both a git workspace and a svn workspace?

    - by Troy
    I have checked out a local working copy of a codebase that lives in an SVN repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server it looks quite a lot different (Maven build, binaries end up in a different directory structure, etc.). Sometimes I need to recreate the build server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (this prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy).

    Of course, sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over onto the "build" workspace. Besides taking a lot of extra time copying a lot of files that I don't actually need (just overlaying the new over the old), this also screws up my SVN metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state.

    So I'm thinking I could make my SVN working copy a local git repo, then "check out" the in-development code from the SVN working copy/git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. If I need to make changes to the build, I push those back into the git master (which is also an SVN working copy), then check them into the main SVN repo.

        |---------------|      |----------------------|      |---------------------|
        | main svn repo | <--- | svn working copy     | <--- | non-svn-versioned   |
        |---------------|      | (svn dev workspace/  |      | build workspace     |
                               |  git master)         |      | (git working copy)  |
                               |----------------------|      |---------------------|

    Just switching everything to git would obviously be better, but: big company, too many people using SVN, too costly to change everything, etc. We're stuck with SVN as the main repo for now. BTW, I know there is a Maven plugin for Eclipse and everything; I'm mainly interested to know if there is a way to maintain a workspace that is both a git working copy and an SVN working copy. Actually, any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?

  • Add console.profile statements to JavaScript/jQuery code on the fly.

    - by novogeek
    Hi folks, we have a thick client app using jQuery heavily, and I want to profile the performance of the code using Firebug's console.profile API. The problem is, I don't want to change the code to add the profile statements. Take this example:

        var search = function() {
            this.init = function() {
                console.log('init');
            }
            this.ajax = function() {
                console.log('ajax');
                // make ajax call using $.ajax and do some DOM manipulations here..
            }
            this.cache = function() {
                console.log('cache');
            }
        }

        var instance = new search();
        instance.ajax();

    I want to profile my instance.ajax method, but I don't want to add profile statements in the code, as that makes the code difficult to maintain. I'm trying to override the methods using closures, like this: http://www.novogeek.com/post/2010/02/27/Overriding-jQueryJavaScript-functions-using-closures.aspx, but am not very sure how I can achieve it. Any pointers on this? I think this would help many big projects profile their code easily without a big change in the code. Here is the idea; just run the code below in the Firebug console to see what I'm trying to achieve:

        var search = function() {
            this.init = function() {
                console.log('init');
            }
            this.ajax = function() {
                console.log('ajax');
                // make ajax call using $.ajax and do some DOM manipulations here..
            }
            this.cache = function() {
                console.log('cache');
            }
        }

        var instance = new search();

        $.each(instance, function(functionName, functionBody) {
            (function() {
                var dup = functionBody;
                functionBody = function() {
                    console.log('modifying the old function: ', functionName);
                    console.profile(functionName);
                    dup.apply(this, arguments);
                    console.profileEnd(functionName);
                }
            })();
            console.log(functionName, '::', functionBody());
        });

    Now what I need is this: if I call instance.ajax(), I want the new ajax() method to be called, along with the console.profile statements. I hope I'm clear with the requirement. Regards, Krishna, http://www.novogeek.com
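    The same wrap-without-editing idea, sketched in Python with the standard cProfile module purely for comparison (the code above is JavaScript/jQuery; the names here are invented):

        import cProfile
        import functools

        def profile_methods(obj):
            # Replace every public callable attribute of obj with a wrapper that
            # profiles the call, leaving the original code untouched.
            for name in dir(obj):
                if name.startswith('_'):
                    continue
                original = getattr(obj, name)
                if not callable(original):
                    continue
                @functools.wraps(original)
                def wrapper(*args, __orig=original, __name=name, **kwargs):
                    profiler = cProfile.Profile()
                    profiler.enable()
                    try:
                        return __orig(*args, **kwargs)
                    finally:
                        profiler.disable()
                        print('profile for', __name)
                        profiler.print_stats()
                setattr(obj, name, wrapper)
            return obj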

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted while using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (files added, removed, or renamed within the iterated directory) which modify the collection?

    There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S. Lott's example:

        filea.txt
        fileb.txt
        filec.txt

    The iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file, and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt while yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were able to somehow maintain an index, dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator.

    This is all theoretical, of course, since there appears to be no built-in (Python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications.

    Edit: The OS of concern is Red Hat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location.

    Edit: Definition of valid: adjective. 1. Well grounded or justifiable, pertinent. (Sorry S. Lott, I couldn't resist.) I've edited the paragraph in question above.
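    For reference, a generator built on os.scandir (available in later Python versions) does yield directory entries one at a time rather than materialising the whole listing first. A small sketch of the process-and-move loop described in the edit above; the paths and the handler are assumptions:

        import os
        import shutil

        def iter_files(src_dir):
            # Yields one filename at a time instead of building the full list.
            # Entries added or removed after the scan starts may or may not be
            # seen, which matches the caveat discussed above.
            with os.scandir(src_dir) as entries:
                for entry in entries:
                    if entry.is_file():
                        yield entry.name

        def run_once(src_dir, dst_dir):
            for name in iter_files(src_dir):
                # ... processing based on the filename would go here ...
                shutil.move(os.path.join(src_dir, name), os.path.join(dst_dir, name))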

  • Problems with starting windows service on windows xp SP3

    - by Michiel Peeters
    I'm currently facing a problem which I cannot resolve, and I really don't know what to do anymore. When I try to start the service I receive a message saying that the service started but then stopped again, because some services will stop if they have nothing to do, for example the Performance Logs and Alerts service. I've looked into the Windows logs, but nothing is written there that could describe why my service keeps stopping. I've also tried to start the Windows service via the command prompt, which gives me the message: "The service is not started, but the service didn't return any faults." I've tried to remove all registry keys which reference my service, which didn't resolve the issue. I've searched on Google (maybe not well enough) to find an answer, but I didn't find any. I did find some websites which describe what I could do, but none of those suggestions worked. This is frustrating because I do not know where to look: I do not have any error message, and I do not have any ID which I can use to search on. I really don't know where to start, and I hope you guys can help me with this one.

    Detailed explanation of the Windows service:

    - OS: Windows XP SP3
    - .NET Framework: .NET 4.0 Client Profile
    - Language: C#
    - Development environment: Visual Studio 2010 Professional (but Visual Studio 2012 RC is installed)
    - Communications: WCF (named pipes), WCF (BasicHttpBinding)

    Named pipes: I chose this solution because I wanted to communicate from a Windows service to a Windows Forms application. It worked for quite some time, but suddenly my Windows service shut itself down and I couldn't restart it anymore. There are two named pipe services implemented: an event service which sends any notification to the Windows Forms application, and a management service which gives my Windows Forms application the possibility to maintain my Windows service.

    BasicHttpBinding: the basic HTTP binding makes the connection to a central server. This connection is then used for streaming information from the client to the server.

    I do not know which additional information you will need, but if you guys need something then I'll try to give it in as much detail as possible. Thank you in advance.

  • disk-to-disk backup without costly backup redundancy?

    - by AaronLS
    A good backup strategy involves a combination of:

    1) disconnected backups/snapshots that will not be affected by bugs, viruses, and/or security breaches
    2) geographically distributed backups to protect against local disasters
    3) testing backups to ensure that they can be restored as needed

    Generally I take an onsite backup daily and an offsite backup weekly, and do test restores periodically. In the rare circumstance that I need to restore files, I do so from the local backup. Should a catastrophic event destroy the servers and local backups, the offsite weekly tape backup would be used to restore the files. I don't need multiple offsite backups with redundancy. I ALREADY HAVE REDUNDANCY THROUGH THE USE OF BOTH LOCAL AND REMOTE BACKUPS.

    I have recovery blocks and par files with the backups, so I already have protection against a small percentage of corrupt bits. I perform test restores to ensure the backups function properly. Should the remote backups experience data loss, I can replace them with one of the local backups. There are historical offsite backups as well, so if a data loss were not noticed for a few weeks (such as after a bug, security breach, or virus), the data could be restored from an older backup. By doing this, the only scenario that poses a risk of complete data loss would be one where the local copies, the remote copies, and the servers all experienced data loss in the same time period. I'm willing to risk that happening, since the odds of that trifecta are negligibly small and the data isn't THAT valuable to me.

    So I hope I have emphasized that I don't need redundancy in my offsite backups, because I have covered all the bases. I know this exact technique is employed by numerous businesses. Of course there are some that take multiple offsite backups, because the data is so incredibly valuable that they don't even want to risk that trifecta disaster, but in the majority of cases the trifecta disaster is an accepted risk. I HAD TO COVER ALL THIS BECAUSE SOME PEOPLE DON'T READ!!! I think I have justified my backup strategy, and the majority of businesses who use offsite tape backups do not have any additional redundancy beyond what is mentioned above (recovery blocks, par files, historical snapshots).

    Now I would like to eliminate the use of tapes for offsite backups and instead use a backup service. Most, however, are extremely costly in terms of $/GB/month of storage. I don't mind paying for transfer bandwidth, but the cost of storage is way too high. All of them advertise that they maintain backups of the data, and I imagine they use RAID as well. Obviously if you were using them to host servers this would all be necessary, but for my scenario I am simply replacing my offsite backups with such a service. So there is no need for RAID, and absolutely no value in another layer of backups of my backups.

    My one and only question: "Are there online data-storage/backup services that do not use redundancy or offer backups (backups of my backups) as part of their packages, and thus are more reasonably priced?"

    NOT my question: "Is this a flawed strategy?" I don't care if you think this is a good strategy or not. I know it's pretty standard. Very few people make an extra copy of their offsite backups. They already have local backups that they can use to replace the remote backups if something catastrophic happens at the remote site. Please limit your responses to the question posed.

    Sorry if I seem a little abrasive, but I had some trolls in my last post who didn't read my requirements or my question and tried to answer a totally different question. I made it pretty clear, but didn't try to justify my strategy, because I didn't ask whether my strategy was justifiable. So I apologize that this was lengthy, as it really didn't need to be, but there are so many trolls here who try to sidetrack questions by responding without addressing the question at hand.

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well connected to our primary data center via fiber; others will be connected to the data center by point-to-point T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks and consolidating most of our Windows domains into one (not all domains, just most). We currently have quite a few Windows domains, and each of course has its own DNS zone. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP for everything except servers, with DHCP reservations for printers and the like. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS; that's currently handled by the Windows DHCP servers on each of the respective domains.
    We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and, primarily, phones. Smaller locations (which will have IP phones but will not be a member of any Windows domain) will not have any Windows DNS/DHCP server available. We are also looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't very quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen. What is the best way to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers):
    - Use Windows DNS and DHCP servers where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. May not be acceptable if the WAN goes down and clients don't get a DHCP response.
    - Use Windows DNS (everywhere, over the WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem wherever Cisco routers are used for DHCP.
    - Use Windows DNS (everywhere, over the WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an extremely low-cost Linux server. Is there any such software that would allow DHCP leases granted by these Linux boxes to be dynamically registered on the Windows DNS servers?
    - Come up with a Linux solution for both DNS and DHCP, and deploy low-cost Linux servers to every site. Requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone for every lease (where a hostname is provided and registration is thus possible), and that multiple servers be either authoritative for the same DHCP scope or at least receive a real-time copy / replication / sync of the lease table, so that if one server dies we still know which MAC has what address.
    - Purchase dedicated DNS/DHCP appliances and deploy them to all sites. From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this.
    - Or some other solution that we've thus far overlooked and will consider upon recommendation.
    Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive, for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP server is able to maintain the same lease table across multiple servers in order to solve this problem. Does anyone have any experience or advice regarding that?
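    As a rough illustration of that ISC option, a minimal two-server failover plus dynamic-DNS sketch for one site might look like the config below. All names, addresses, and the key are hypothetical placeholders, and one caveat needs verifying: ISC dhcpd performs RFC 2136/TSIG updates, while secure updates into an AD-integrated zone normally require GSS-TSIG.

        # /etc/dhcp/dhcpd.conf on the site's primary server (the partner declares "secondary")
        failover peer "site1-failover" {
            primary;
            address 10.1.0.11;            # this server
            port 647;
            peer address 10.1.0.12;       # the partner server
            peer port 647;
            max-response-delay 60;
            max-unacked-updates 10;
            mclt 3600;                    # primary-only settings
            split 128;
            load balance max seconds 3;
        }

        # Dynamic DNS registration via RFC 2136; key and zone are placeholders.
        ddns-update-style interim;
        ddns-domainname "site1.example.com";
        key ddns-key { algorithm hmac-md5; secret "REPLACE_WITH_REAL_KEY"; };
        zone site1.example.com. { primary 10.1.0.5; key ddns-key; };

        subnet 10.1.20.0 netmask 255.255.255.0 {
            option routers 10.1.20.1;
            option domain-name-servers 10.1.0.5, 10.1.0.6;
            pool {
                failover peer "site1-failover";
                deny dynamic bootp clients;
                range 10.1.20.50 10.1.20.200;
            }
        }

    With this arrangement both partners keep their own copy of the lease table and resynchronize after an outage, which is the behaviour the question asks about; Windows DHCP of the same era (pre-Server 2012) offered only split scopes or clustering rather than lease-table replication.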

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management are handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make the setup easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power.
    The system needs to be able to handle millions of files, varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here.
    Here is a quick list of things I am looking for:
    - Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    - Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    - Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above.
    - Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node that the data is actually on, to cut down on network traffic.
    Here is what I've looked at so far:
    Storage management:
    - GlusterFS: Looks really nice and easy to use, but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    - GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    - Ceph: Seems way too immature right now.
    Distributed processing:
    - Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable.
    Both:
    - Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation: distributed storage plus job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure whether the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    - OpenStack: I've done some reading on this but I'm having trouble deciding whether it fits well with my problem or not.
    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!
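    On the GlusterFS data-location point: the FUSE client in reasonably recent GlusterFS releases exposes a virtual extended attribute, trusted.glusterfs.pathinfo, that reports which brick(s) hold a file. Availability depends on the GlusterFS version, so treat the Python sketch below (with a hypothetical mount path) as something to verify rather than a given.

        import os

        def gluster_brick_hint(path):
            """Best-effort hint of which brick(s) hold a file on a GlusterFS mount.

            Reads the trusted.glusterfs.pathinfo virtual xattr exposed by the
            FUSE client; returns None if the attribute is not available.
            Requires Python 3.3+ on Linux for os.getxattr.
            """
            try:
                raw = os.getxattr(path, "trusted.glusterfs.pathinfo")
            except OSError:
                return None
            return raw.decode("utf-8", errors="replace")

        # Hypothetical usage, e.g. to feed a locality hint to a job scheduler:
        # print(gluster_brick_hint("/mnt/gluster/data/sample.bin"))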

    Read the article

  • Logging to MySQL without empty rows/skipped records?

    - by Lee Ward
    I'm trying to figure out how to make Squid proxy log to MySQL. I know ACL order is pretty important, but I'm not sure I understand exactly what ACLs are or do; it's difficult to explain, but hopefully you'll see where I'm going with this as you read!
    I have created the lines to make Squid interact with a helper in squid.conf as follows:
        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log
    The external ACL helper (mysql_lg.php) is a PHP script and is as follows:
        error_reporting(0);
        if (!defined('STDIN')) {
            define("STDIN", fopen("php://stdin", "r"));
        }
        $res = mysql_connect('localhost', 'squid', 'testsquidpw');
        $dbres = mysql_select_db('squid', $res);
        while (!feof(STDIN)) {
            // One request per line: "%LOGIN %SRC %PROTO %URI", URL-encoded by Squid.
            $line = trim(fgets(STDIN));
            $fields = explode(' ', $line);
            $user = rawurldecode($fields[0]);
            $cli_ip = rawurldecode($fields[1]);
            $protocol = rawurldecode($fields[2]);
            $uri = rawurldecode($fields[3]);
            // Values are not escaped, so a quote in the URI breaks the INSERT,
            // and die() then exits the helper entirely.
            $q = "INSERT INTO logs (id, user, cli_ip, protocol, url) VALUES ('', '".$user."', '".$cli_ip."', '".$protocol."', '".$uri."');";
            mysql_query($q) or die (mysql_error());
            // Note: $fault is never assigned above, so this ERR branch can never fire.
            if ($fault) { fwrite(STDOUT, "ERR\n"); };
            fwrite(STDOUT, "OK\n");
        }
    The configuration I have right now looks like this:
        ## Authentication Handler
        auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
        auth_param ntlm children 30
        auth_param negotiate program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
        auth_param negotiate children 5

        # Allow squid to update log
        external_acl_type mysql_log %LOGIN %SRC %PROTO %URI php /etc/squid3/custom/mysql_lg.php
        acl ex_log external mysql_log
        http_access allow ex_log

        acl localnet src 172.16.45.0/24
        acl AuthorizedUsers proxy_auth REQUIRED
        acl SSL_ports port 443
        acl Safe_ports port 80      # http
        acl Safe_ports port 21      # ftp
        acl Safe_ports port 443     # https
        acl CONNECT method CONNECT
        acl blockeddomain url_regex "/etc/squid3/bl.acl"
        http_access deny blockeddomain
        deny_info ERR_BAD_GENERAL blockeddomain

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports
        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports
        # Allow the internal network access to this proxy
        http_access allow localnet
        # Allow authorized users access to this proxy
        http_access allow AuthorizedUsers
        # FINAL RULE - Deny all other access to this proxy
        http_access deny all
    From testing, the closer to the bottom I place the logging lines, the less it logs. Oftentimes it even places empty rows into the MySQL table. The file-based logs in /var/log/squid3/access.log are correct, but many of the rows in the access logs are missing from the MySQL logs. I can't help but think it's down to the order I'm putting lines in, because I want to log everything to MySQL: unauthenticated requests, blocked requests, and which category blocked a specific request.
    The reason I want this in MySQL is because I'm trying to have everything managed via a custom web-based frontend, and I want to avoid using any shell commands and access to system log files if I can help it. The end result is to make it as easy as possible to maintain without keeping staff waiting on the phone whilst I add a new rule and reload the server! Hopefully someone can help me out here, because this is very much a learning experience for me and I'm pretty stumped. Many thanks in advance for any help!
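    For reference, a table shaped to match the helper's INSERT might look roughly like the sketch below; only the column names appear in the question, so the types, the auto-increment id, and the engine are assumptions:

        CREATE TABLE logs (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            user     VARCHAR(64),
            cli_ip   VARCHAR(45),   -- wide enough for an IPv6 address in text form
            protocol VARCHAR(16),
            url      TEXT
        ) ENGINE=InnoDB;

    One thing worth checking against whatever schema is actually in use: the helper inserts '' for id, which MySQL only converts to the next auto-increment value outside strict SQL mode; in strict mode that INSERT fails, which would also show up as missing rows.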

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >