Search Results

Search found 2088 results on 84 pages for 'jobs'.


  • Are game engines developed in the USA?

    - by numerical25
    I just finished talking to a Flashpoint Academy recruiter about their curriculum. I told him that after I graduate I would like to go more in-depth into learning game engines and how to make games, so I asked whether their school taught anything about graphics APIs such as DirectX. He asked me to elaborate a little, as if he wasn't sure what I was talking about. So I asked whether their school taught how to build game engines. He said, "No, we only teach with the tools at hand, such as XNA or the Unreal engine." He further said that "most jobs that deal with building game engines go overseas, and most of the creative work is done in the United States." To be honest, I really had no intention of going to this school; I just wanted to learn more about it in case I had second thoughts somewhere down the line. To me it sounded like a bunch of BS, but my question to you guys is: is it?

    Read the article

  • Formulae for U and V buffer offset

    - by Abhi
    Hi all! What should the buffer offset values for U and V be for the YUV444 format type? For example, when I use the YV12 format the values are as follows (with iInputHeight = 160 and iInputWidth = 112):

        ppData.inputIDMAChannel.UBufOffset = iInputHeight * iInputWidth + (iInputHeight * iInputWidth) / 4;
        ppData.inputIDMAChannel.VBufOffset = iInputHeight * iInputWidth;

    ppData is an object of the following structure:

        typedef struct ppConfigDataStruct
        {
            //---------------------------------------------------------------
            // General controls
            //---------------------------------------------------------------
            UINT8 IntType;              // FIRSTMODULE_INTERRUPT: the interrupt will be
                                        // raised once the first sub-module has finished its job.
                                        // FRAME_INTERRUPT: the interrupt will be raised
                                        // after all sub-modules have finished their jobs.

            //---------------------------------------------------------------
            // Format controls
            //---------------------------------------------------------------
            // For input
            idmaChannel   inputIDMAChannel;
            BOOL          bCombineEnable;
            idmaChannel   inputcombIDMAChannel;
            UINT8         inputcombAlpha;
            UINT32        inputcombColorkey;
            icAlphaType   alphaType;

            // For output
            idmaChannel   outputIDMAChannel;
            CSCEQUATION   CSCEquation;  // Selects R2Y or Y2R CSC equation
            icCSCCoeffs   CSCCoeffs;    // CSC coefficients
            icFlipRot     FlipRot;      // Flip/rotate controls for VF

            BOOL          allowNopPP;   // Flag to indicate we need a NOP PP processing
        } *pPpConfigData, ppConfigData;

    and the idmaChannel structure is as follows:

        typedef struct idmaChannelStruct
        {
            icFormat      FrameFormat;  // YUV or RGB
            icFrameSize   FrameSize;    // Frame size
            UINT32        LineStride;   // Stride in bytes
            icPixelFormat PixelFormat;  // Input frame RGB format; set NULL to use standard settings
            icDataWidth   DataWidth;    // Bits per pixel for RGB format
            UINT32        UBufOffset;   // Offset of U buffer from Y buffer start address;
                                        // ignored for non-planar image formats
            UINT32        VBufOffset;   // Offset of V buffer from Y buffer start address;
                                        // ignored for non-planar image formats
        } idmaChannel, *pIdmaChannel;

    I want the formulae for ppData.inputIDMAChannel.UBufOffset and ppData.inputIDMAChannel.VBufOffset for YUV444. Thanks in advance!
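
    For planar YUV444 each of the three planes is full resolution (width * height bytes per plane, assuming 8 bits per sample), so the chroma offsets are simple multiples of the luma plane size. A minimal sketch, assuming the same V-before-U plane ordering implied by the YV12 example above (the plane order is an assumption and depends on the layout the IPU expects):

        // Y plane: width * height bytes, at offset 0
        // V plane: width * height bytes, immediately after Y
        // U plane: width * height bytes, immediately after V
        UINT32 planeSize = iInputHeight * iInputWidth;
        ppData.inputIDMAChannel.VBufOffset = planeSize;      // 1 * width * height
        ppData.inputIDMAChannel.UBufOffset = 2 * planeSize;  // 2 * width * height

    If the hardware instead expects U before V, the two assignments simply swap.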

    Read the article

  • Resque: Slow worker startup and Forking

    - by David John
    I'm currently moving my application from a Linode setup to EC2. Redis is installed on a remote instance, with various worker instances interacting with the queue. That's all going fantastically. My problem is the amount of time it takes for a worker to be 'instantiated', and slow forking. Starting a worker will usually take between 30 seconds and a minute (from god.rb starting the worker rake task to the worker actively starting work on the queue). I could live with that, but I never experienced such a wait on my current Linode production box, so I believe it's a symptom of a bigger problem. The next issue is that jobs that took a second or less in my previous environment now seem to take about 5 to 10 times longer. I'm assuming this must be some sort of issue with my Ubuntu install on EC2? One notable difference is that I'm running REE 1.8.7-2010.01 in my new setup, and REE 1.8.6 on the old Linode boxes. Has anyone else experienced these issues?

    Read the article

  • JQuery validate e-mail address regex

    - by RussP
    Hi folks, I'm not too sure how to do this. I need/want to validate email addresses by regex, using something like this:

        [a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+(?:[A-Z]{2}|com|org|net|edu|gov|mil|biz|info|mobi|name|aero|asia|jobs|museum)

    Now I need to run this in a jQuery function like the one below. Where does the validation go, and what is the expression?

        $j("#fld_emailaddress").live('change', function() {
            var emailaddress = $j("#fld_emailaddress").val();

            // validation here?
            if (emailaddress) {}
            // end validation

            $j.ajax({
                type: "POST",
                url: "../ff-admin/ff-register/ff-user-check.php",
                data: "fld_emailaddress=" + emailaddress,
                success: function(msg) {
                    if (msg == 'OK') {
                        $j("#fld_username").attr('disabled', false);
                        $j("#fld_password").attr('disabled', false);
                        $j("#cmd_register_submit").attr('disabled', false);
                        $j("#fld_emailaddress").removeClass('object_error'); // if necessary
                        $j("#fld_emailaddress").addClass("object_ok");
                        $j('#email_ac').html('&nbsp;<img src="img/cool.png" align="absmiddle"> <font color="Green"> Your email <strong>' + emailaddress + '</strong> is OK.</font> ');
                    } else {
                        $j("#fld_username").attr('disabled', true);
                        $j("#fld_password").attr('disabled', true);
                        $j("#cmd_register_submit").attr('disabled', true);
                        $j("#fld_emailaddress").removeClass('object_ok'); // if necessary
                        $j("#fld_emailaddress").addClass("object_error");
                        $j('#email_ac').html(msg);
                    }
                }
            });
        });
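
    One way to slot the validation in: compile the pattern into a RegExp and bail out before the AJAX call when it fails. A minimal sketch, assuming the whole address must match (hence the added anchors) and letting the i flag stand in for the [A-Z]{2} country-code branch; names are taken from the snippet above:

        var emailPattern = new RegExp(
            "^[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*" +
            "@(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\\.)+" +
            "(?:[a-z]{2}|com|org|net|edu|gov|mil|biz|info|mobi|name|aero|asia|jobs|museum)$",
            "i");

        // Inside the change handler, before $j.ajax(...):
        if (!emailPattern.test(emailaddress)) {
            $j('#email_ac').html('Please enter a valid email address.');
            return; // skip the AJAX round-trip entirely
        }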

    Read the article

  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and other tasks are scheduled to run (DB backups, AV scans, etc.). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds). I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive; they don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little and makes no difference). A 3rd-party tool that provided that kind of service would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
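
    For reference, the mechanism the question half-remembers is the Win32 working-set/locking API. A minimal sketch, offered as an illustration rather than a recommendation (the reservations above about starving other processes apply, and the sizes here are made-up examples):

        #include <windows.h>

        // Ask the memory manager to keep roughly this much of our
        // working set resident across periods of inactivity.
        SetProcessWorkingSetSize(GetCurrentProcess(),
                                 200 * 1024 * 1024,   // minimum working set, bytes
                                 400 * 1024 * 1024);  // maximum working set, bytes

        // Alternatively, pin a specific critical region:
        // VirtualLock(pHeapRegion, regionSize);

    The watchdog idea amounts to the same thing done softly: a periodic request that touches the hot pages keeps them from being paged out in the first place.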

    Read the article

  • What are SharePoint (MOSS 2007) development/deployment best practices?

    - by Satish
    We are deploying SharePoint MOSS 2007 at work, and I'm trying to come up with a SharePoint development and deployment methodology. We have Dev/QA/Prod environments, and I need a way, preferably automated, to deploy changes from Dev to QA and from there to Prod. We are creating site collections, web parts, etc. Some of it is done directly within SharePoint, some through SharePoint Designer or Visual Studio. I'm looking for a way to extract this and deploy it to the other environments. I tried stsadm backup/restore and import/export, but they all move the data along as well; I just need the structure deployed. Content deployment paths and jobs do the same thing. We use MSBuild and CruiseControl.NET to automate the build/deployment process for our other .NET projects, and I'm looking for something similar with SharePoint if possible. What are your best practices for this? Since my team is still learning, we don't have a defined process, and we are open to changing our development process if needed.

    Read the article

  • EF4 Code-First CTP5 Many-to-one

    - by Kevin McKelvin
    I've been trying to get EF4 CTP5 to play nicely with an existing database, but I'm struggling with some basic mapping issues. I have two model classes so far:

        public class Job
        {
            [Key, Column(Order = 0)]
            public int JobNumber { get; set; }

            [Key, Column(Order = 1)]
            public int VersionNumber { get; set; }

            public virtual User OwnedBy { get; set; }
        }

    and

        [Table("Usernames")]
        public class User
        {
            [Key]
            public string Username { get; set; }

            public string EmailAddress { get; set; }
            public bool IsAdministrator { get; set; }
        }

    My DbContext class exposes these as IDbSet. I can query my users, but as soon as I added the OwnedBy field to the Job class I began getting this error in all my tests for the Jobs:

        Invalid column name 'UserUsername'.

    I want this to behave like NHibernate's many-to-one, whereas I think EF4 is treating it as a complex type. How should this be done?
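
    The error suggests EF is generating a foreign-key column name by convention (type name + key name, hence UserUsername) rather than treating OwnedBy as a complex type. Below is a minimal sketch of overriding that convention with the fluent API -- hedged, because the exact fluent method names shifted between CTP5 and the final EF 4.1 release, and the real FK column name (OwnerUsername here) is invented for illustration:

        public class MyContext : DbContext
        {
            public IDbSet<Job> Jobs { get; set; }
            public IDbSet<User> Users { get; set; }

            protected override void OnModelCreating(ModelBuilder modelBuilder)
            {
                // Point Job.OwnedBy at the existing FK column instead of
                // the conventional 'UserUsername'.
                modelBuilder.Entity<Job>()
                            .HasRequired(j => j.OwnedBy)
                            .WithMany()
                            .Map(m => m.MapKey("OwnerUsername")); // hypothetical column name
            }
        }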

    Read the article

  • Hibernate does not allow an embedded object with an int field to be null?

    - by Jason Novak
    Hibernate does not allow me to persist an object that contains a null embedded object with an integer field. For example, if I have a class called Thing that looks like this:

        @Entity
        public class Thing {
            @Id
            public String id;
            public Part part;
        }

    where Part is an embeddable class that looks like this:

        @Embeddable
        public class Part {
            public String a;
            public int b;
        }

    then trying to persist a Thing object with a null Part causes Hibernate to throw an exception. In particular, this code:

        Thing th = new Thing();
        th.id = "thing.1";
        th.part = null;
        session.saveOrUpdate(th);

    causes Hibernate to throw this exception:

        org.hibernate.PropertyValueException: not-null property references a null or transient value: com.ace.moab.api.jobs.Thing.part

    My guess is that this happens because Part is an embedded class, so Part.a and Part.b are simply columns in the Thing database table. Since Thing.part is null, Hibernate wants to set the Part.a and Part.b column values to null for the thing.1 row. However, Part.b is an integer, and Hibernate will not allow integer columns in the database to be null. That is what causes the exception, right? So I am looking for workarounds for this problem. I noticed that making Part.b an Integer instead of an int seems to work, but for reasons I won't bore you with, this is not a good option for us. Thanks!
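
    For comparison, here is the wrapper-type workaround the question already mentions, sketched out: the column becomes nullable because Integer can represent the absent value, while a convenience accessor keeps the rest of the code working with a primitive (the accessor and its default of 0 are illustrative additions, not part of the original code):

        @Embeddable
        public class Part {
            public String a;
            public Integer b; // wrapper type: the column may now be NULL

            // Convenience accessor so callers keep a primitive view;
            // defaulting a missing value to 0 is an assumption.
            public int getBOrZero() {
                return b != null ? b.intValue() : 0;
            }
        }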

    Read the article

  • Rails problem with Delayed_Job and Active Record

    - by Michael Waxman
    I'm using Delayed_Job to grab a Twitter user's data from the API, but it's not saving it in the model for some reason! Please help! (Code below.)

        class BandJob < Struct.new(:band_id, :band_username) # parameters
          def perform
            require 'json'
            require 'open-uri'
            band = Band.find_by_id(band_id)
            t = JSON.parse(open("http://twitter.com/users/show/#{band_username}.json").read)
            band.screen_name = t['screen_name']
            band.profile_background_image = t['profile_background_image_url']
            band.url = 'http://' + band_username + '.com'
            band.save!
          end
        end

    To clarify, I'm actually not getting any errors; it's just not saving. Here's what my log looks like:

        * [JOB] acquiring lock on BandJob
        Delayed::Job Update (3.1ms)  UPDATE "delayed_jobs" SET locked_at = '2009-11-09 18:59:45', locked_by = 'host:dhcp128036151228.central.yale.edu pid:2864' WHERE (id = 10442 and (locked_at is null or locked_at < '2009-11-09 14:59:45') and (run_at <= '2009-11-09 18:59:45'))
        Band Load (1.5ms)  SELECT * FROM "bands" WHERE ("bands"."id" = 34) LIMIT 1
        Band Update (0.6ms)  UPDATE "bands" SET "updated_at" = '2009-11-09 18:59:45', "profile_background_image" = 'http://a3.twimg.com/profile_background_images/38193417/fbtile4.jpg', "url" = 'http://Coldplay.com', "screen_name" = 'coldplay' WHERE "id" = 34
        Delayed::Job Destroy (0.5ms)  DELETE FROM "delayed_jobs" WHERE "id" = 10442
        * [JOB] BandJob completed after 0.5448
        1 jobs processed at 1.8011 j/s, 0 failed ...

    Thanks!

    Read the article

  • MySQL 5.5.8 Gets Periodic Lag

    - by CYREX
    I'm using MySQL 5.5.8 on an Ubuntu system, and every X amount of time it hits a huge lag that lasts a couple of seconds; then everything goes back to normal until the next lag. The interval varies, but it looks like it happens periodically, like hiccups in MySQL. I'm using InnoDB. What could be causing this sort of periodic problem? I don't have any cron jobs or processes that run whenever the X period happens. The X period could be anywhere between 30 minutes and 2 hours; for example, it could happen every 30 minutes for the next 12 hours, or every 2 hours for the next 8 hours. My configuration:

        key_buffer_size = 256M
        max_allowed_packet = 1M
        table_cache = 1024
        table_open_cache = 1024
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 4M
        myisam_sort_buffer_size = 32M
        thread_cache_size = 128
        query_cache_size = 128M
        log-slow-queries = slow.log
        long_query_time = 5
        log-queries-not-using-indexes

        # Try number of CPUs * 2 for thread_concurrency
        thread_concurrency = 4
        max_connections = 512

        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /usr/local/mysql/data
        # You can set .._buffer_pool_size up to 50-80%
        # of RAM, but beware of setting memory usage too high
        innodb_buffer_pool_size = 1G
        #innodb_additional_mem_pool_size = 20M
        # Set .._log_file_size to 25% of buffer pool size
        #innodb_log_file_size = 64M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 0
        #innodb_lock_wait_timeout = 50

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [myisamchk]
        key_buffer_size = 64M
        sort_buffer_size = 64M
        read_buffer = 2M
        write_buffer = 2M

    There are 200+ tables divided across 3 databases. The most heavily written one is InnoDB; the others are mostly read. Several of the InnoDB tables have more than 2 million records; the other databases top out at about 400 thousand records and do not change often. The PC is a Core 2 Duo 8400 with 4 GB RAM, running 32-bit Ubuntu.

    Read the article

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java application client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well.

    As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense: in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable.

    So fine, this little object graph gets installed in the application client's JNDI tree. This is where I start to get queasy. Is the ConnectionManager -- the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc. -- wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection-pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree. Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?

    Read the article

  • Best method to search hierarchical data

    - by WDuffy
    I'm looking at building a facility that allows querying for data with hierarchical filtering. I have a few ideas about how to go about it, but I was wondering if there are any recommendations or suggestions that might be more efficient. As an example, imagine a user searching for a job. The job areas would be as follows:

        1: Scotland
        2: --- West Central
        3: ------ Glasgow
        4: ------ Etc
        5: --- North East
        6: ------ Ayrshire
        7: ------ Etc

    A user can search somewhere specific (i.e. Glasgow) or in a larger area (i.e. Scotland). The two approaches I am considering are:

        1. Keep a note of children in the database for each record (i.e. cat 1 would have 2, 3, 4 in its children field) and query against that record with SELECT * FROM Jobs WHERE Category IN Areas.childrenField.
        2. Use a recursive function to find all results that have a relation to the selected area.

    The problems I see with these are: holding this data in the DB means having to keep track of all changes to the structure, and recursion is slow and inefficient. Any ideas, suggestions or recommendations on the best approach? I'm using C# ASP.NET with an MSSQL 2005 DB.
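
    Since the back end is SQL Server 2005, one way to push approach 2 into the database is a recursive common table expression, which avoids both the maintained children field and per-level recursion in C#. A minimal sketch, assuming a hypothetical Areas(id, parentId) adjacency table (table and column names are invented for illustration):

        -- Expand the selected area into itself plus all descendants,
        -- then join jobs against that set.
        WITH AreaTree AS (
            SELECT id FROM Areas WHERE id = @selectedArea
            UNION ALL
            SELECT a.id
            FROM Areas a
            INNER JOIN AreaTree t ON a.parentId = t.id
        )
        SELECT j.*
        FROM Jobs j
        INNER JOIN AreaTree t ON j.Category = t.id;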

    Read the article

  • Telerik chart not loading correctly in new window (ajax issue?)

    - by Phillip Schmidt
    I have a page that contains user controls with Telerik charts (grids also, but they work fine). From this page the user can click a button to be redirected to a printer-friendly version of the page, which opens a new window via JavaScript and goes through a slightly different view (for formatting and such), but the Telerik code is all the same. The problem is that my chart displays just fine in the original window, but the new window displays basically an empty chart with no data. This bug is only present in IE, and only applies to charts; grids work fine, for whatever reason. I'm thinking this is due to differences in script caching between browsers -- correct me if I'm wrong, I'm semi-new to client-directed web development. Anyway, I read somewhere that Telerik has issues loading data and/or JS files when loaded via AJAX, so maybe that's the problem? If so, how could I get around it? And if not, any ideas on what could be causing this issue? It's causing me a great deal of frustration, since a print-preview page seems like it should be the easiest of jobs.

    Edit: The charts are being rendered as HTML (if somebody can explain how to render them as images, that would be awesome), and dev tools shows basically the same thing between Chrome and IE. Whenever my web service goes back up I'll WinMerge them and look for any peculiarities/differences. In the meantime, though, the render-as-an-image concept sounds promising: I could just save the image from the first page and insert it right into the print-preview page, right? And since it's a print-preview page, it won't need to be interactive or anything, so that'd work out nicely.

    Another (important) edit: [screenshots attached to the original question: the suspected culprit requests, more detail on them, and a side-by-side of the chart working in Chrome and failing in IE]

    Read the article

  • How to pass common arguments to Perl modules

    - by Leonard
    I'm not thrilled with the argument-passing architecture I'm evolving for the (many) Perl scripts that have been developed for calling various Hadoop MapReduce jobs. There are currently 8 scripts (of the form run_something.pl) that are run from cron, and more are on the way -- we expect anywhere from 1 to 3 more for every function we add to Hadoop. Each of these has about 6 identical command-line parameters and a couple of command-line parameters that are similar, all specified with Euclid. The implementations are in a dozen .pm modules, some of which are common and others of which are unique.

    Currently I'm passing the args globally to each module. Inside run_something.pl I have:

        set_common_args    (%ARGV);
        set_something_args (%ARGV);

    and inside Something.pm I have:

        sub set_something_args {
            (%MYARGS) = @_;
        }

    so then I can do:

        if ( $MYARGS{'--needs_more_beer'} ) {
            $beer++;
        }

    I'm seeing that I'm probably going to have additional "common" files that I'll want to pass args to, so I'll have three or four set_xxx_args calls at the top of each run_something.pl, and it just doesn't seem too elegant. On the other hand, it beats passing the whole stupid argument array down the call chain, and choosing and passing individual elements down the call chain is (a) too much work, (b) error-prone, and (c) doesn't buy much. In lots of ways what I'm doing is just object-oriented design without the object-oriented language trappings, and it looks uglier without said trappings, but nonetheless... Anyone have thoughts or ideas?
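
    One way to collapse the repeated set_xxx_args calls is a single shared module that owns the argument hash and that every .pm queries directly. A minimal sketch, with the module name and accessors invented for illustration:

        # Args.pm -- hypothetical central holder for command-line args
        package Args;
        use strict;
        use warnings;

        my %ARGS;

        sub init { %ARGS = @_; }                  # called once from run_something.pl
        sub get  { my ($key) = @_; $ARGS{$key} }  # read from anywhere

        1;

    Then each run_something.pl calls Args::init(%ARGV) exactly once, and any module -- common or unique -- can write:

        if ( Args::get('--needs_more_beer') ) {
            $beer++;
        }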

    Read the article

  • SQL Server job (stored proc) trace

    - by Jit
    Hi friends, I need your suggestions on tracing an issue. We run data load jobs early in the morning, loading data from an Excel file into a SQL Server 2005 DB. When the job runs on the production server, it often takes 2 to 3 hours to complete. We drilled down to one job step that takes 99% of the total time. Running that job step (stored procs) in the staging environment (with the same production database restored) takes 9 to 10 minutes, while the same step takes hours on the production server when run early in the morning as part of the job. Production always gets stuck on that one job step. I would like to run a trace on that job step (around 10 stored procs run for each user in a while loop within the step) and collect information to figure out the issue. What options does SQL Server 2005 offer for this? I want to run the trace only for these SPs, not for a certain time period, because a trace produces a lot of information and it becomes very difficult for me (not being a DBA) to analyze that much trace output and figure out the issue. So I want to collect info about the specific SPs only. Let me know what you suggest. I appreciate your time and help. Thanks.
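
    For what it's worth, SQL Server 2005 lets you script a server-side trace that only keeps rows matching a name pattern, which keeps the output small. A minimal sketch -- the event/column/filter numbers below are the documented SQL Trace IDs as best recalled (verify against the sp_trace_setevent documentation), and the file path and proc-name pattern are placeholders:

        DECLARE @TraceID int;
        EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\jobstep';  -- .trc is appended

        -- Capture TextData (1), SPID (12) and Duration (13) for
        -- SP:Completed (event 43) and RPC:Completed (event 10).
        DECLARE @on bit; SET @on = 1;
        EXEC sp_trace_setevent @TraceID, 43, 1,  @on;
        EXEC sp_trace_setevent @TraceID, 43, 12, @on;
        EXEC sp_trace_setevent @TraceID, 43, 13, @on;
        EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
        EXEC sp_trace_setevent @TraceID, 10, 13, @on;

        -- Only keep rows whose text mentions the proc of interest
        -- (column 1 = TextData, logical op 0 = AND, comparison 6 = LIKE).
        EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%usp_MyLoadStep%';

        EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace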

    Read the article

  • Learning PHP - start out using a framework or no?

    - by Kevin Torrent
    I've noticed a lot of jobs in my area for PHP. I've never used PHP before, and I figure that if picking it up gets me more opportunities, it might be a good idea. The problem is that PHP without any framework is ugly, and 99% of the time it's really bad code. All the tutorials and books I've seen are really lousy: they never show any kind of good programming practice, just the quick-and-dirty way of doing things. I'm afraid that trying to learn PHP this way will just imprint these bad practices in my head and make me waste time later trying to unlearn them. I've used C# in the past, so I'm familiar with OOP and software design patterns and the like. Should I be trying to learn PHP by using one of the better-known frameworks for it? I've looked at CakePHP, Symfony and the Zend Framework so far; Zend seems to be the most flexible without being too constraining, unlike Cake and Symfony (although Symfony seemed less constraining than CakePHP, which is trying too hard to be Ruby on Rails), but many tutorials for Zend assume you already know PHP and just want to learn the framework. What would be my best route to learning PHP -- but GOOD PHP that uses real software engineering techniques instead of spaghetti code? It seems all the PHP books and resources either assume you are using raw PHP and therefore showcase bad practices, or assume you already know PHP and therefore don't even touch on parts of the language.

    Read the article

  • What is the most common way to use a middleware in node with express and connect

    - by Bernhard
    I'm thinking about the correct way to make use of middleware in a node.js web project, using Express and Connect, that is growing at the moment. Of course there are middlewares that have to pass or extend requests globally, but in a lot of cases there are special jobs, like preparing incoming data, where the middleware should only run for a certain set of HTTP methods and routes. I have a component-based architecture, and each component brings its own middleware layer that implements handlers for the requests that component can handle. On app startup, every required component is loaded and prepared. Is it a good idea to bind middleware execution to URLs to keep CPU load lower, or is it better to use middleware only for global purposes? Here's a dummy of what URL-related middleware would look like:

        app.use(function(req, res, next) {
            // Check if the requested route is part of the current component,
            // or if the middleware should run for every request
            if (APP.controller.groups.Component.isExpectedRoute(req) ||
                APP.controller.groups.Component.getConfig().MIDDLEWARE_PASS_ALL === true) {
                // Execute the middleware code here
                console.log('This is a route which should be affected by middleware');
                ...
                next();
            } else {
                next();
            }
        });
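
    Note that Express/Connect can already scope middleware by URL prefix: app.use accepts an optional mount path, so the route check can move out of the function body entirely. A minimal sketch (the /component prefix is an invented example):

        // Runs only for requests whose path starts with /component;
        // inside the handler, req.url is relative to the mount point.
        app.use('/component', function(req, res, next) {
            console.log('component-scoped middleware');
            next();
        });

    Requests outside the prefix never invoke the function at all, which is exactly the CPU-load concern raised above.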

    Read the article

  • How to effectively execute this cron job?

    - by Lost_in_code
    I have a table with 200 rows, and I'm running a cron job every 10 minutes to perform an insert/update operation on the table. The operation needs to be performed on only 5 rows at a time, each time the cron job runs. So in the first 10 minutes records 1-5 are updated, records 6-10 in the 20th minute, and so on. When the cron job runs for the 40th time, all the records in the table will have been updated exactly once. That, at least, is what should be achieved, and the next cron job should then repeat the process. The problem is that, every time the cron job runs, the insert/update operation should be performed on N rows (not just 5). So, if N is 100, all records would be updated by just 2 cron jobs, and the next cron job would start the process again. Here's an example: this is the table I currently have (200 records). Every time the cron job executes, it needs to pick N records (N is set as a variable in PHP) and update the time_md5 field with the MD5 value of the current time:

        +---------+----------------------------------+
        | id      | time_md5                         |
        +---------+----------------------------------+
        | 10      | 971324428e62dd6832a2778582559977 |
        | 72      | 1bd58291594543a8cc239d99843a846c |
        | 3       | 9300278bc5f114a290f6ed917ee93736 |
        | 40      | 915bf1c5a1f13404add6612ec452e644 |
        | 599     | 799671e31d5350ff405c8016a38c74eb |
        | 56      | 56302bb119f1d03db3c9093caf98c735 |
        | 798     | 47889aa559636b5512436776afd6ba56 |
        | 8       | 85fdc72d3b51f0b8b356eceac710df14 |
        | ..      | .......                          |
        | ..      | .......                          |
        | ..      | .......                          |
        | ..      | .......                          |
        | 340     | 9217eab5adcc47b365b2e00bbdcc011a |  <-- 200th record
        +---------+----------------------------------+

    So the first record (id 10) should not be updated more than once until all 200 records have been updated once; then the process should start over. I have some idea of how this could be achieved, but I'm sure there are more efficient ways of doing it. Any suggestions?
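
    One common pattern is a per-row pass counter: each cron run grabs the N rows with the lowest counter, so a row is only revisited once every other row has caught up, and the rollover happens by itself. A minimal sketch, assuming a hypothetical pass column added to the table and PDO for access (table and column names are invented):

        <?php
        $n   = 100; // rows per cron run
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

        // Pick the N least-recently-processed rows.
        $ids = $pdo->query("SELECT id FROM mytable ORDER BY pass ASC, id ASC LIMIT $n")
                   ->fetchAll(PDO::FETCH_COLUMN);

        $stmt = $pdo->prepare(
            "UPDATE mytable SET time_md5 = :md5, pass = pass + 1 WHERE id = :id");
        foreach ($ids as $id) {
            $stmt->execute(array(':md5' => md5(time()), ':id' => $id));
        }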

    Read the article

  • Am I a dinosaur programmer?

    - by dlb
    I have been a professional programmer for more than 30 years and have chosen a career path involving hands-on programming. Programming is something that I love, and I take great pride in the fact that I have continued to keep up to date with current technology. Projects on which I have worked include large enterprise projects as well as smaller desktop programs. The problem I am facing is that I do not have any web-based experience other than some web services, and most of the jobs now available have some web component. I have now been out of work for a year and a half, and I have been keeping busy by studying technology that will bridge that gap: CSS, JavaScript, jQuery and Ruby on Rails, with AJAX next. Hiring managers give no consideration whatsoever to the studying I have been doing. I know that I cannot compete at a senior software level, but companies will not hire someone with my experience at a more junior level. Is there any way to break out of this Catch-22?

    Read the article

  • Automatically use inclusion tags (?) in a template, depending on installed apps

    - by Ludwik Trammer
    The title may be a little confusing, but I don't know what else to call it. I would like to create a Django project with a large set of applications that you could arbitrarily turn on or off using the INSTALLED_APPS option in settings.py (you would obviously also need to edit urls.py and run syncdb). After being turned on, an app should be able to automatically:

        1. Register its content in site-wide search. Luckily django-haystack has this built in, so it's not a problem.
        2. Register cron jobs. django-cron does exactly that. Not a problem.
        3. Register a widget that should be displayed on the homepage. The homepage should include a list of boxes with widgets from different applications.

    For the third point I thought about inclusion tags, because you can put them anywhere on a page and they control both content and presentation. The problem is that I don't know how to automatically get a list of the inclusion tags provided by my applications and display them one by one on the homepage. I need a way to register them somehow and then display all registered tags, as sketched below.
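
    One way to do the registration is the autodiscover trick Django's own admin uses: each app ships a module with an agreed name that registers its widget as a side effect of being imported. A minimal sketch, with the module and function names invented for illustration:

        # homepage/registry.py -- hypothetical central widget registry
        from django.conf import settings
        from django.utils.importlib import import_module

        WIDGETS = []

        def register(template_name):
            """Apps call this to add an inclusion-tag template to the homepage."""
            WIDGETS.append(template_name)

        def autodiscover():
            """Import <app>.homepage_widgets from every installed app, if present."""
            for app in settings.INSTALLED_APPS:
                try:
                    import_module('%s.homepage_widgets' % app)
                except ImportError:
                    continue

    A homepage view can then loop over WIDGETS and render each registered template in its own box. (django.utils.importlib was the import helper in Django of this era; the standard library's importlib works the same way.)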

    Read the article

  • Can you do this with Hudson?

    - by damian
    I want to create a Hudson job that takes an ID as a parameter and uses that ID to compute the SVN repository path. Where I work you have an SVN path for every issue that you resolve, and then all the issues are joined into a single SVN path. What I want to do is run static code analysis on the individual issues. So I thought of having an Ant build.xml that I use for every issue and parameterizing the job with the issue ID. I have tried to achieve that, but the SVN path doesn't replace the parameter. I have tried #issueId, %issueId%, ${issueId} and ${env.issueId} without success. It fails with errors like:

        Location 'http://svn-path:8181/svn/devSet/issues/${env.chuid}' does not exist
        Checking out a fresh workspace because C:\Documents and Settings\dnoseda\.hudson\jobs\test\workspace\${env.chuid} doesn't exist
        Checking out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        ERROR: Failed to check out http://svn-path:8181/svn/devSet/issues/${env.chuid}
        org.tmatesoft.svn.core.SVNException: svn: '/svn/!svn/bc/46190/devSet/issues/$%7Benv.chuid%7D' path not found: 404 Not Found (http://svn-path:8181)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:64)
            at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:51)
            at ...

    I am starting to think that I can not do what I want. Do you know how I can set up the configuration to achieve this? Thanks for any help.

    Edit: The section of the job configuration where I want to use the parameter is this:

        <scm class="hudson.scm.SubversionSCM">
          <locations>
            <hudson.scm.SubversionSCM_-ModuleLocation>
              <remote>http://svn-path:8181/svn/devSet/issues/${env.issueid}</remote>
            </hudson.scm.SubversionSCM_-ModuleLocation>
          </locations>

    Read the article

  • Should I go to school and get my degree in computer science?

    - by ryan
    I'll try to keep this short and simple. I've always enjoyed programming, and I've been doing it since high school. Right after I graduated from high school (2002), I opted to skip college because I was offered a software engineer position. I quit a couple of years later to team up on various startup companies. Most of them did not launch as well as expected, but honestly that did not matter to me, because I learned so much from the experience. Fast-forwarding to today, now 25, I need a job due to this tough economic climate. Looking on Craigslist, a lot of the listings require computer science degrees. It's evident now that programming is what I want to do, because I never seem to get enough of it. But just the thought of having to push through 2 years without attending any real computer class for an associate degree at age 25 is very, very discouraging, and the thought of having to learn from the basics (Hello WOOOOORRLLLD) just does not seem exciting. I guess I have 3 questions to wrap this up:

        1. Should I just suck it up and go back to school while working at McDonald's at age 25?
        2. Is there a way I can skip all the boring stuff and just get tested on what I know?
        3. From your experience, how many jobs use computer science degrees as prerequisites?

    Or am I screwed, and better pray that my next startup will be the next big thing?

    Read the article

  • Using git (or some other VCS) at your company

    - by supercheetah
    Some friends of mine and I were talking recently about version control, and how they were using VSS at their jobs and would probably be moving off of it soon. One of them said his company will likely go with Team Foundation Server. Eventually the conversation got around to some of the open-source VCSes out there, including git and SVN. None of us really knew of any companies that use either of these internally, although we imagined a number of them do so for SVN; we weren't too sure about git. I brought up Google and Android using it, but my friend figured that's only for the public-facing source code, and that they may use something different for internal projects. Apparently it's more than just SCM that makes TFS so intriguing:

      • Microsoft sales people and support (although my friend did point out some things to his managers that he thought might be misleading on MS's part)
      • Integration of things beyond SCM, including project management (I'm just finding out that there are tools geared toward the same things for git)
      • Again, it's Microsoft, and the transition from VSS to TFS seems logical (or does it?)

    I'm not much of a fan of SVN, so I didn't bring it up much, but I am curious whether git is used at your company for internal projects. Have you thought about it and decided against it? Any reason why?

    Read the article

  • What are your Programming Fallacies/Myths?

    - by pms1969
    I recently started a new job, and as is typical of all jobs, once you've left you get blamed for everything. Not long after I started, a change was required to an app (web-based) that we maintain, and it was quickly pointed out that the actual code for this site had been lost a long time ago, and the only changes we could make to it were ones that required changes to markup [it was a pre-compiled site]. Being new, I needed a little help finding my way around the code, and enlisted the services of one of my colleagues. I made my changes, and then re-enlisted his help to deploy them. While prepping for the deployment (getting the app onto the QA server) we discovered that there were actually 2 different, very similarly named folders in our source repository. It transpired that for the last year or so, markup changes had been made to the site directly, and these were the only differences from the code in the slightly incorrectly named folder in source control. So we did have all the code, and we can now properly support the site. This put me in mind of a trick we played on a junior programmer once in a previous job, where we told him he couldn't/shouldn't do a certain thing in code as it would likely bring the server to its knees and cost the company thousands of pounds (a gag that lasted months :-). And another one from the first programming job I took: the batch commission run was just going to crash once a month and there was nothing to be done about it, causing a call-out, and call-out compensation for the on-call guy (a bug I fixed as soon as I became the on-call guy -- 2am call-outs don't work for me). So I was wondering... what other programming fallacies/myths are out there worth sharing?

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing will involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with the results. This needs to occur for all records, ideally every three hours. Each new user of the site will add between 50 and 500 records that need to be processed in this fashion, so the number will not be steady. The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need cron jobs, but I'm concerned that processing records at this scale may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has experience or tips on similar subjects. I've never worked at this magnitude before, and for all I know this will be trivial for the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period starts, I don't care whether they are processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering whether I should process in batches every 5 minutes, 15 minutes, hour, or whatever works, and how best to approach this (and make it scalable in a way that is fair to all users).
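
    On the batching question, one low-tech way to keep each user's records together while spreading load across the window is to process one user per cron tick: pick the user whose records are stalest, handle all of that user's rows in one transaction, and let a frequent cron cycle through users round-robin. A minimal sketch, assuming hypothetical users.last_processed_at and records.user_id columns (all names invented):

        <?php
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        // Take the user who has waited longest; a 5-minute cron tick
        // then cycles through everyone well inside the 3-hour window.
        $userId = $pdo->query(
            "SELECT id FROM users ORDER BY last_processed_at ASC LIMIT 1"
        )->fetchColumn();

        $pdo->beginTransaction();
        $rows = $pdo->prepare("SELECT * FROM records WHERE user_id = ?");
        $rows->execute(array($userId));
        foreach ($rows as $row) {
            // ... apply the formulas and random factors, write results ...
        }
        $pdo->prepare("UPDATE users SET last_processed_at = NOW() WHERE id = ?")
            ->execute(array($userId));
        $pdo->commit();

    Whether one user per tick is enough depends on the 50-500 records-per-user figure; several users per tick is the same loop with LIMIT k on the first query.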

    Read the article
