Search Results

Search found 906 results on 37 pages for 'sean johnson'.

Page 18/37

  • Symlinking (ln) faster than moving (mv)?

    - by Chad Johnson
    When we build web software releases, we prepare the release in a temporary directory and then replace the release directory with the temporary one just prepared:

        # Move and replace existing release directory.
        mv /path/to/httpdocs /path/to/httpdocs.before
        mv /path/to/$newReleaseName /path/to/httpdocs

    Under this scheme, it happens that with about 1 in every 15 releases, a user is using a file in the original release directory at the exact moment the commands above run, and a fatal error occurs for that user. I am wondering whether symlinking as follows would be significantly faster, in terms of processing time, thereby helping to lessen the likelihood of this problem:

        # Remove and replace existing release symlink.
        ln -sf /path/to/$newReleaseName /path/to/httpdocs
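    Worth noting alongside the speed question: a symlink swap can also be made atomic, which removes the in-between moment entirely, by creating the new link under a temporary name and renaming it over the old one (rename(2) replaces the path in a single step, whereas ln -sf removes and recreates it). A minimal sketch of that swap in Python, with placeholder paths rather than the real layout:

        import os

        def atomic_symlink_swap(target, link_name):
            """Point link_name at target via an atomic rename, so there is
            never a moment when the path is missing or half-updated.
            link_name must already be a symlink (or absent), not a real directory."""
            tmp_link = link_name + ".tmp"
            if os.path.lexists(tmp_link):    # clean up a leftover temp link
                os.remove(tmp_link)
            os.symlink(target, tmp_link)
            os.replace(tmp_link, link_name)  # rename(2): atomic replacement

        # e.g. atomic_symlink_swap("/path/to/release-20100506", "/path/to/httpdocs")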

    Read the article

  • Is this a good centralized DVCS workflow?

    - by Chad Johnson
    I'm leaning toward using Mercurial, coming from Subversion, and I'd like to maintain a centralized workflow like I had with Subversion. Here is what I am thinking:

        stable (clone on server)
            default (branch)
        development (clone on server)
            default (branch)
            bugs (branch)
                developer1 (clone on local machine)
                developer2 (clone on local machine)
                developer3 (clone on local machine)
            feature1 (branch)
                developer3 (clone on local machine)
            feature2 (branch)
                developer1 (clone on local machine)
                developer2 (clone on local machine)

    As far as branches vs. clones is concerned, does this workflow make sense? Do I have things straight? Also, the 'stable' clone IS the release. Does it make sense for the 'default' branch to be the release branch that all other branches are ultimately merged into?

    Read the article

  • How to avoid code duplication for one-off methods with a slightly different signature

    - by Sean
    I am wrapping a number of functions from a vendor API in C#. Each of the wrapping functions will fit the pattern:

        public IEnumerator<IValues> GetAggregateValues(string pointID, DateTime startDate, DateTime endDate, TimeSpan period)
        {
            // Validate Data
            // Break up Requesting Time-Span
            // Make Requests
            // Read Results (through another method call)
        }

    5 of the 6 requests are aggregate data pulls and have the same signature, so it makes sense to put them in one method and pass the aggregate type to avoid duplicating code. The 6th method, however, follows the exact same pattern with the same result set, but is not an aggregate, so no time period is passed to the function (changing the signature). Is there an elegant way to handle this kind of situation without coding a one-off function to handle the non-aggregate request?
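    For illustration only - sketched in Python since this listing spans several languages - one common shape is a single shared worker that treats the aggregation period as optional, with the lone non-aggregate request simply omitting it; in C# the analogue would be a nullable TimeSpan? parameter or a private core method that thin public overloads delegate to. Every name below is hypothetical:

        from datetime import datetime, timedelta

        def _fetch(point_id, start, end, period=None):
            # Hypothetical stand-in for the vendor call, so the sketch runs.
            kind = "aggregate" if period is not None else "raw"
            yield {"point": point_id, "from": start, "to": end, "type": kind}

        def get_values(point_id, start, end, period=None):
            """Shared worker: validate, break up the time span, request, read.
            Aggregate pulls pass a period; the non-aggregate pull omits it."""
            if end <= start:
                raise ValueError("end must be after start")
            chunk = period if period is not None else end - start
            t = start
            while t < end:
                yield from _fetch(point_id, t, min(t + chunk, end), period)
                t += chunk

        # Aggregate:     list(get_values("P1", datetime(2010, 5, 1), datetime(2010, 5, 2), timedelta(hours=6)))
        # Non-aggregate: list(get_values("P1", datetime(2010, 5, 1), datetime(2010, 5, 2)))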

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid.

    There are many different ways we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether a visitor is new or returning.

    Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient?

    I don't really understand B-trees beyond some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine whether this is a new visitor to this site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ...so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't find one for this site ID.

    Any insight on making this as efficient as possible would be appreciated :)

    Update - this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

    Read the article

  • Why would some POST data go missing when using Uploadify?

    - by Chad Johnson
    I have been using Uploadify in my PHP application for the last couple of months, and I've been trying to track down an elusive bug. I receive emails when fatal errors occur, and they provide me a good amount of detail. I've received dozens of them. I have not, however, been able to reproduce the problem myself. Some users (like myself) experience no problem, while others do. Before I give details of the problem, here is the flow:

        1. User visits the edit screen for a page in the CMS I am using.
        2. The record id for the page is put into the form as a hidden value.
        3. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
        4. User clicks the Submit button for my form.
        5. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
        6. Uploadify uploads to a custom process script.
        7. Uploadify finishes uploading and triggers the Javascript completion callback.
        8. The Javascript callback calls $('#myForm').submit() to submit the form.

    Now that's what SHOULD happen. I've received reports of the upload freezing at 100%, and others where "I/O Error" is displayed. What's happening is that the form is submitted by the completion callback, but some post parameters present in the form are simply not in the post data. The id for the page, which I said earlier is added to the form as a hidden field, is simply not there in the post data ($_POST) - there is no item for 'id' in the $_POST array. The strange thing is, the post data DOES contain values for some fields. For instance, I have an input of type text called "name", which is for the record name, and it does show up in the post data.

    Here is what I've gathered:

        - This has been happening on Mac OS X 10.5 and 10.6, Windows XP, and Windows 7. I can post exact user agent strings if that helps.
        - Users must use Flash 10.0.12 or later. We've made it so the form reverts to using a normal "file" field if they have < 10.0.12.

    Does anyone have ANY ideas at all what the cause of this could be?

    Read the article

  • Raise event from http listener (Async listener handler)

    - by Sean
    Hello, I have created a simple web server, leveraging the .NET HttpListener class. I am using ThreadPool.QueueUserWorkItem() to spawn a thread that listens for incoming requests. The threaded method uses HttpListener.BeginGetContext(callback, listener), and in the callback method I resume with HttpListener.EndGetContext() as well as raise an event to notify the UI that the listener received data. Here is the question - how should I raise that event? Initially I used the ThreadPool:

        ThreadPool.QueueUserWorkItem(state => ReceivedRequest(httpListenerContext, receivedRequestArgs));

    But then I started to doubt that; maybe it should be a dedicated thread (as opposed to waiting for a thread from the pool):

        new Thread(() => ReceivedRequest(httpListenerContext, receivedRequestArgs)).Start();

    Thoughts? Thanks.

    Read the article

  • NetBeans not finding JasperReports scriptlet

    - by Sean
    I'm using JasperReports 3.7.6 with NetBeans 6.9.1 and iReport 3.7.6. I have a report that uses scriptlets. When I run it from iReport everything is fine because I can tell iReport where to find the .jar file with the scriptlets. When I run that same report from a JSF-2.0 application the fields that rely on the scriptlet are not being populated correctly - i.e. the scriptlet isn't being called. I've tried putting the scriptlet in the project's library folder and I've tried copying the package containing the scriptlet into the project. Neither has worked. I'm not sure how I can get the report to call the scriptlets when it is run from my JSF project. Can anyone shed some light on this for me?

    Read the article

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application submit a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because it raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem.

    Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow... does anyone have ANY ideas at all why some of the post data would be getting wiped out?

        1. User visits the edit screen for a page in the CMS I am using.
        2. The record id for the page is put into the form as a hidden value.
        3. User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
        4. User clicks the Submit button for my form.
        5. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
        6. Uploadify uploads to a custom process script.
        7. Uploadify finishes uploading and triggers the Javascript completion callback.
        8. The Javascript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).

    Read the article

  • Linked List exercise, what am I doing wrong?

    - by Sean Ochoa
    Hey all. I'm doing a linked list exercise that involves dynamic memory allocation, pointers, classes, and exceptions. Would someone be willing to critique it and tell me what I did wrong and what I should have done better, both with regard to style and to those subjects I listed above?

        /* Linked List exercise */
        #include <iostream>
        #include <exception>
        #include <string>
        using namespace std;

        class node{
        public:
            node * next;
            int * data;

            node(const int i){
                data = new int;
                *data = i;
            }

            node& operator=(node n){
                *data = *(n.data);
            }

            ~node(){
                delete data;
            }
        };

        class linkedList{
        public:
            node * head;
            node * tail;
            int nodeCount;

            linkedList(){
                head = NULL;
                tail = NULL;
            }

            ~linkedList(){
                while (head){
                    node* t = head->next;
                    delete head;
                    if (t) head = t;
                }
            }

            void add(node * n){
                if (!head) {
                    head = n;
                    head->next = NULL;
                    tail = head;
                    nodeCount = 0;
                }else {
                    node * t = head;
                    while (t->next) t = t->next;
                    t->next = n;
                    n->next = NULL;
                    nodeCount++;
                }
            }

            node * operator[](const int &i){
                if ((i >= 0) && (i < nodeCount))
                    throw new exception("ERROR: Invalid index on linked list.", -1);
                node *t = head;
                for (int x = i; x < nodeCount; x++) t = t->next;
                return t;
            }

            void print(){
                if (!head) return;
                node * t = head;
                string collection;
                cout << "[";
                int c = 0;
                if (!t->next) cout << *(t->data);
                else
                    while (t->next){
                        cout << *(t->data);
                        c++;
                        if (t->next) t = t->next;
                        if (c < nodeCount) cout << ", ";
                    }
                cout << "]" << endl;
            }
        };

        int main (const int & argc, const char * argv[]){
            try{
                linkedList * myList = new linkedList;
                for (int x = 0; x < 10; x++) myList->add(new node(x));
                myList->print();
            }catch(exception &ex){
                cout << ex.what() << endl;
                return -1;
            }
            return 0;
        }

    Read the article

  • Is It Possible To Use Javascript/CSS To Swap Style Sheets When A Mobile Device Rotates?

    - by Sean M
    I am working on a site that must be designed with mobile accessibility in mind. As part of our brainstorming, we wondered whether it's possible to detect, for a mobile browser (e.g. Mobile Safari or the Android browser), when the viewing device has changed orientation, and to use that as a trigger to change the page content. As the title of this question implies, our best-case scenario is the ability to detect the orientation change and use it to alter the CSS on the fly, so as to present a slightly different page for landscape versus portrait. Of course we could just design a page that looks good one way and make it obvious that it's supposed to be viewed that way, but the cool-stuff factor of a page that looks good either way is pretty appealing. Is this idea implementable? Practical?

    Read the article

  • Project Euler (P14): recursion problems

    - by sean mcdaid
    Hi, I'm doing the Collatz sequence problem in Project Euler (problem 14). My code works with numbers below 100000, but with bigger numbers I get a stack overflow error. Is there a way I can refactor the code to use tail recursion, or otherwise prevent the stack overflow? The code is below:

        import java.util.*;

        public class v4 {
            // use a HashMap to store computed number, and chain size
            static HashMap<Integer, Integer> hm = new HashMap<Integer, Integer>();

            public static void main(String[] args) {
                hm.put(1, 1);
                final int CEILING_MAX = Integer.parseInt(args[0]);
                int len = 1;
                int max_count = 1;
                int max_seed = 1;
                for (int i = 2; i < CEILING_MAX; i++) {
                    len = seqCount(i);
                    if (len > max_count) {
                        max_count = len;
                        max_seed = i;
                    }
                }
                System.out.println(max_seed + "\t" + max_count);
            }

            // find the size of the hailstone sequence for N
            public static int seqCount(int n) {
                if (hm.get(n) != null) {
                    return hm.get(n);
                }
                if (n == 1) {
                    return 1;
                } else {
                    int length = 1 + seqCount(nextSeq(n));
                    hm.put(n, length);
                    return length;
                }
            }

            // Find the next element in the sequence
            public static int nextSeq(int n) {
                if (n % 2 == 0) {
                    return n / 2;
                } else {
                    return n * 3 + 1;
                }
            }
        }
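    For comparison, here is the same memoised computation written iteratively - sketched in Python for brevity rather than the question's Java - so the chain is walked with a loop instead of a deep recursion. (In the Java version it is also worth checking the value type: for seeds below one million, intermediate 3n+1 values can exceed the int range, so long is the safer choice.)

        # Memoised Collatz chain lengths without recursion: walk the chain
        # forward, remembering the unknown values, then fill the cache backward.
        cache = {1: 1}

        def seq_count(n):
            path = []
            while n not in cache:
                path.append(n)
                n = n // 2 if n % 2 == 0 else 3 * n + 1
            length = cache[n]
            for m in reversed(path):
                length += 1
                cache[m] = length
            return length

        best = max(range(2, 1000000), key=seq_count)
        print(best, seq_count(best))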

    Read the article

  • Python - Test directory permissions

    - by Sean
    In Python on Windows, is there a way to determine if a user has permission to access a directory? I've taken a look at os.access, but it gives false results:

        >>> os.access('C:\haveaccess', os.R_OK)
        False
        >>> os.access(r'C:\haveaccess', os.R_OK)
        True
        >>> os.access('C:\donthaveaccess', os.R_OK)
        False
        >>> os.access(r'C:\donthaveaccess', os.R_OK)
        True

    Am I doing something wrong? Is there a better way to check if a user has permission to access a directory?
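    One widely used workaround, since os.access() is known to be unreliable with Windows ACLs, is simply to attempt the operation and catch the failure. A minimal sketch, with the directory names being placeholders:

        import os

        def can_read_dir(path):
            """Try to list the directory instead of asking os.access();
            the operating system is the final authority on access."""
            try:
                os.listdir(path)
                return True
            except OSError:  # PermissionError / WindowsError both derive from OSError
                return False

        # can_read_dir(r'C:\haveaccess')     -> True if the ACLs allow it
        # can_read_dir(r'C:\donthaveaccess') -> False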

    Read the article

  • What does a modern, standard Microsoft-based technology stack look like?

    - by Sean Owen
    Let's say I asked Microsoft to describe the perfect, modern, Microsoft-based technology stack to power a standard e-commerce web site, which perhaps has a simple 2-tier web/database architecture. What would it be like? Yes, I'm just looking for a list of product / technology names. For example, in the J2EE world, I might describe a stack that includes:

        J2EE 6 standard
        JavaServer Faces
        Glassfish 3
        MySQL 5.1.x

    I'm guessing this stack includes some combination of .NET, SQL Server, ASP.NET, IIS, etc., but I am not familiar with this world. Looking for ideas on the equivalent in Microsoft-land.

    Read the article

  • Help with SSH Tunnel [closed]

    - by Andrew Johnson
    I am running a Django instance locally and doing some Facebook development. So, I set up a port on a remote machine to forward to my local machine, so that Facebook can hit the web server and have the requests forwarded to my local machine. Unfortunately, I'm getting the following error in my browser when I try to access the page, http://dev.thegreathive.com/. Any idea what I'm doing wrong? I think the problem is on my local machine, since if I kill the SSH tunnel, the error message changes.

    Read the article

  • Inconsistency in modified/created/accessed time on mac

    - by Seth Johnson
    I'm having trouble using os.utime to correctly set the modification time on the Mac (Mac OS X 10.6.2, running Python 2.6.1 from /usr/bin/python). It's not consistent with the touch utility, and it's not consistent with the properties displayed in the Finder's "Get Info" window. Consider the following command sequence. The 'created' and 'modified' times in the plain text refer to the "Get Info" window attributes. As a reminder, os.utime takes arguments (filename, (atime, mtime)).

        >>> import os
        >>> open('tempfile','w').close()

    'created' and 'modified' are both the current time.

        >>> os.utime('tempfile', (1000000000, 1500000000) )

    'created' is the current time, 'modified' is July 13, 2017.

        >>> os.utime('tempfile', (1000000000, 1000000000) )

    'created' and 'modified' are both September 8, 2001.

        >>> os.path.getmtime('tempfile')
        1000000000.0
        >>> os.path.getctime('tempfile')
        1269021939.0
        >>> os.path.getatime('tempfile')
        1269021951.0

    ...but os.path.get?time and os.stat don't reflect it.

        >>> os.utime('tempfile', (1500000000, 1000000000) )

    'created' and 'modified' are still both September 8, 2001.

        >>> os.utime('tempfile', (1500000000, 1500000000) )

    'created' is September 8, 2001, 'modified' is July 13, 2017.

    I'm not sure if this is a Python problem or a Mac stat problem. When I exit the Python shell and run touch -a -t 200011221234 tempfile, neither the modification nor the creation times are changed, as expected. Then I run touch -m -t 200011221234 tempfile, and both 'created' and 'modified' times are changed. Does anyone have any idea what's going on? How do I change the modification and creation times consistently on the Mac? (Yes, I am aware that on Unixy systems there is no "creation time.")
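    A small diagnostic sketch that may help separate the two notions of time involved: on Mac OS X, os.stat() exposes the file's birth time as a separate attribute from st_mtime, while os.utime() only sets the access and modification times. The attribute below is real on macOS/BSD; reading it as "what Finder calls Created" is an assumption to verify against Get Info.

        import os

        st = os.stat('tempfile')
        print('modified (st_mtime):  ', st.st_mtime)
        print('accessed (st_atime):  ', st.st_atime)
        # st_birthtime is only present on macOS / BSD stat results.
        print('birth time (Created?):', getattr(st, 'st_birthtime', 'not available'))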

    Read the article

  • How can I click a button behind a transparent UIView?

    - by Sean Clark Hess
    Let's say we have a view controller with one subview. The subview takes up the center of the screen, with 100 px margins on all sides. We then add a bunch of little stuff to click on inside that subview. We are only using the subview to take advantage of the new frame (x=0, y=0 inside the subview is actually 100,100 in the parent view). Then, imagine that we have something behind the subview, like a menu. I want the user to be able to select any of the "little stuff" in the subview, but if there is nothing there, I want touches to pass through it (since the background is clear anyway) to the buttons behind it. How can I do this? It looks like touchesBegan goes through, but buttons don't work.

    Read the article

  • Multi-level clones with Git?

    - by Chad Johnson
    So, I'm thinking of having the following centralized setup with Git (each of these are clones):

        stable
        development
        developer1
        developer2
        developer3

    So, I created my stable repository:

        git --bare init

    made the 'development' clone:

        git clone ssh://host.name//path/to/stable/project.git development

    and made a 'developer' clone:

        git clone ssh://host.name//path/to/development/project.git developer

    So, now, I make a change, commit, and then push from my developer account:

        git commit --all
        git push

    and the change goes to the development clone. But now, when I ssh to the server, go to the development clone directory, and run "git fetch" or "git pull", I don't see the changes. So what do I do? Am I totally misunderstanding things and doing things wrong? How can I see the changes in the 'development' clone that I pushed from my 'developer' clone? This worked fine in Mercurial.

    Read the article

  • How to break this string and sort on version number

    - by Sean P
    I have an ASP app that has a string array as such (there are many more entries than this):

        7.5.0.17 Date: 05_03_10
        7.5.0.18 Date: 05_03_10
        7.5.0.19 Date: 05_04_10
        7.5.0.2 Date: 02_19_10
        7.5.0.20 Date: 05_06_10
        7.5.0.3 Date: 02_26_10
        7.5.0.4 Date: 03_02_10
        7.5.0.5 Date: 03_08_10
        7.5.0.6 Date: 03_12_10
        7.5.0.7 Date: 03_19_10
        7.5.0.8 Date: 03_25_10
        7.5.0.9 Date: 03_26_10
        7.5.1.0 Date: 05_06_10

    How do I go about sorting these strings by version number, descending?
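    For the comparison itself, the usual trick is to split the version part into numeric components and compare those as numbers, so that 7.5.0.20 sorts above 7.5.0.3 even though it sorts below it as text. A short sketch of that logic in Python (the app is classic ASP, so treat this as pseudocode for the equivalent VBScript):

        lines = [
            "7.5.0.17 Date: 05_03_10",
            "7.5.0.2 Date: 02_19_10",
            "7.5.1.0 Date: 05_06_10",
        ]

        def version_key(line):
            version = line.split()[0]                        # e.g. "7.5.0.17"
            return tuple(int(part) for part in version.split('.'))

        # Sort descending by the numeric version tuple.
        for line in sorted(lines, key=version_key, reverse=True):
            print(line)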

    Read the article

  • Can Automapper populate a strongly typed object from the data in a NameValueCollection?

    - by Seth Petry-Johnson
    Can Automapper map values from a NameValueCollection onto an object, where target.Foo receives the value stored in the collection under the "Foo" key? I have a business object that stores some data in named properties and other data in a property bag. Different views make different assumptions about the data in the property bag, which I capture in page-specific view models. I want to use AutoMapper to map both the "inherent" attributes (the ones that always exist) as well as the "dynamic" attributes (the ones that vary per view and may or may not exist).

    Read the article

  • How important is W3C XHTML/CSS validation when finalizing work?

    - by Andrew G. Johnson
    Even though I always strive for complete validation these days, I often wonder if it's a waste of time. If the code runs and it looks the same in all browsers (I use browsershots.org to verify), do I need to take it any further, or am I just being overly anal?

    What level do you hold your code to when you create it for:

        a) yourself
        b) your clients

    P.S. Jeff and company, why doesn't Stack Overflow validate? :)

    EDIT: Some good insights. I think that since I've been so valid-obsessed for so long, I program knowing what will cause problems and what won't, so I'm in a better position than people who create a site first and then "go back and fix the validation problems". I think I may post another question on Stack Overflow, "Do you validate as you go or do you finish and then go back and validate?", as that seems to be where this question is going.

    Read the article
