Search Results

Search found 264 results on 11 pages for 'chad larson'.

  • jQuery UI datepicker customization

    - by Chad
    I have the jQuery datepicker working, but I need to be able to select more than just dates. I also need to be able to choose between a couple of strings, "Yesterday" and "Today" to be precise, so the underlying input can contain any date as well as the strings "Yesterday" or "Today". Is there some way I can do this by tweaking the existing jQuery UI datepicker?
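
    One low-tech option, sketched below: keep the stock datepicker on the input and add two plain links that write the literal strings into the same field. The element id, the preset class, and the date format are assumptions; the snippet only presumes that jQuery and jQuery UI are already loaded.

      <!-- #when and .preset are placeholder names -->
      <input type="text" id="when">
      <a href="#" class="preset">Today</a>
      <a href="#" class="preset">Yesterday</a>

      <script>
      $(function () {
        // Normal datepicker behaviour for real dates
        $("#when").datepicker({ dateFormat: "mm/dd/yy" });

        // The preset links bypass the picker and drop their own text
        // ("Today" / "Yesterday") into the same input
        $(".preset").click(function (e) {
          e.preventDefault();
          $("#when").datepicker("hide").val($(this).text());
        });
      });
      </script>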

    Read the article

  • Can I override DropLocation target to avoid network latency?

    - by Chad
    In Team Build 2008, the Drop Location for a build is no longer specified in the .proj file, and instead is stored in the database and maintained in the GUI tool. The GUI tool only accepts a network path as a drop location (i.e. \\server\share) and will not accept a local path. Our build server also hosts the dropped files, so it seems that forcing a file copy operation to go through the network share introduces a lot of lag time when copying a large number of files. I would like to override this feature so that I can specify a local directory for drop location, but I can't figure out how.

    Read the article

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

      * chat
        master
        remotes/origin/HEAD -> origin/master
        remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch:

      git fetch

    then I switch to the chat branch:

      git checkout --track -b chat origin/chat

    and I even do a pull, just to make sure everything is up to date:

      git pull

    and my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

      # On branch chat
      # Changed but not updated:
      #   (use "git add/rm <file>..." to update what will be committed)
      #   (use "git checkout -- <file>..." to discard changes in working directory)
      #
      #       modified:   app/controllers/chat_controller.rb
      #       modified:   app/views/dashboard/index.html.erb
      #       modified:   app/views/dashboard/layout.js.erb
      #       modified:   app/views/layouts/dashboard.html.erb
      #       deleted:    app/views/project/.tmp_edit.html.erb.55742~
      #       deleted:    app/views/project/.tmp_edit.html.erb.83482~
      #       modified:   public/stylesheets/dashboard/layout.css
      #
      # Untracked files:
      #   (use "git add <file>..." to include in what will be committed)
      #
      #       .loadpath
      #       .project
      #       config/database.yml
      #       config/environments/development.yml
      #       config/environments/production.yml
      #       config/environments/test.yml
      #       log/
      no changes added to commit (use "git add" and/or "git commit -a")

    Read the article

  • In ASP.NET MVC (3.0/Razor), do you prefer multiple views, or conditionals within views? Why?

    - by Chad
    For my new web app, I'm debating between using multiple views or conditionals within views. An example scenario would be showing different info to authenticated vs. non-authenticated users. This could be handled a couple of ways:

      - In the controller, check IsAuthenticated and return a view based on that
      - In the view, check IsAuthenticated and show blocks of info based on that

    Pros of multiple views: smaller, less complicated views - next to no logic in the view. Pros of a single view: fewer view files to maintain. The obvious cons are the opposites of the pros: more files to maintain, or more complicated view files. Which do you prefer? Why? Any pros/cons I haven't outlined here? Update: Assume each view uses a layout page and partial views to abstract the obviously repetitive code.
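
    If the decision lands in the controller, a minimal C# sketch of that first option might look like the following; the controller and view names are hypothetical, and Request.IsAuthenticated stands in for whatever authentication check the app actually uses.

      using System.Web.Mvc;

      // DashboardController and the two view names are placeholders
      public class DashboardController : Controller
      {
          public ActionResult Index()
          {
              // Authenticated users get the full view, everyone else a
              // trimmed-down one; the views themselves stay free of auth logic.
              if (Request.IsAuthenticated)
                  return View("Index");

              return View("IndexAnonymous");
          }
      }

    The view-side option is the mirror image: a single Index view wrapping the members-only markup in an if (Request.IsAuthenticated) block, trading a second file for a conditional in the template.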

    Read the article

  • How can I convert a C# application that uses StandardInput and StandardOutput in .NET Compact Framework?

    - by Chad
    I have an application that uses the Process class, StandardInput, and StandardOutput to communicate with an external executable. I am using this to pass strings back and forth, and it works well in my Windows application. On the mobile device, I do not see StandardInput or StandardOutput in the Process class. Is there an easy way to replicate this functionality on the Mobile?

    Read the article

  • How do I keep my branches up to date with the 'default' branch under Mercurial?

    - by Chad Johnson
    Let's say I have the following workflow with Mercurial:

      stable (clone on server)
        default (branch)
      development (clone on server)
        default (branch)
        bugs (branch)
          developer1 (clone on local machine)
          developer2 (clone on local machine)
          developer3 (clone on local machine)
        feature1 (branch)
          developer3 (clone on local machine)
        feature2 (branch)
          developer1 (clone on local machine)
          developer2 (clone on local machine)

    My main line of development which is always in a release ready state is 'default'. So the 'default' branch in the 'development' clone is always release-ready. Now suppose I'm developer1 working on feature2. And let's say also that feature2 takes several months. It's pretty obvious that I'm going to want to keep my 'feature2' branch up to date with the 'default' branch. Does this make sense? How would I go about doing this with Mercurial?
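
    The usual approach is to pull and merge 'default' into the feature branch at regular intervals. A minimal sketch, run from developer1's local clone that carries feature2 (the pull source URL is hypothetical):

      hg pull https://server/development   # hypothetical source; fetches new changesets, including work on default
      hg update feature2                   # make sure the working copy is on feature2
      hg merge default                     # fold the latest release-ready changes in
      hg commit -m "Merged default into feature2"

    Doing this regularly keeps each merge small, so merging feature2 back into default months later is far less painful.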

    Read the article

  • Why use a Rails-like deployment mechanism over 'git pull' for releasing?

    - by Chad Johnson
    To release my centralized webapp, I COULD have a vhost pointed to some directory and then just do a 'git pull' when I want to release, updating the files. But Rails has a different deployment mechanism: it copies files to a subdirectory and then points a symlink ('current') to that new subdirectory. I understand that it is probably more acceptable to do a Rails-like deployment, because the release is built in some directory and then the symlink is pointed at that directory, so the switch is much faster and it's less likely that users would experience weird issues while a release is happening. Are there any other advantages to the Rails approach? Or is a 'git pull' approach actually more widely accepted?
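
    For reference, the layout the Rails (Capistrano-style) mechanism produces looks roughly like this; the application path and timestamps are made up, but the releases/, shared/, and current pieces are the standard shape. The whole release is assembled under releases/ and only the final symlink flip is visible to the running site:

      /srv/www/myapp/                          (hypothetical deploy path)
        releases/
          20100301120000/
          20100315093000/
        shared/
        current -> releases/20100315093000/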

    Read the article

  • Share your conky tips / scripts / .conkyrc

    - by Chad Birch
    I've just started tinkering with conky, and I'm hoping the StackOverflow crowd can share some of the cool things they've done with this tool. Scripts and .conkyrc files specifically geared towards developers would be especially good to see. Some good examples of developer-centric functions would be repository-monitors or heck, even something that monitors StackOverflow. Screenshots of what the functionality actually looks like would be appreciated as well.

    Read the article

  • Tracking DB changes with Zend Framework?

    - by Chad Johnson
    I am trying to decide between the Zend Framework and Ruby on Rails for my web application. If I go with ZF, I need the following:

      - A way to incrementally track changes to my database, as with RoR's migration feature (001_something.sql, 002_something_else.sql).
      - A place to put SQL for the next release of my software. At work, in our custom PHP solution, we just have release.sql, which gets run, archived, and blanked out upon release.

    ZF has Zend_Db_Schema_Manager, which does the same thing, but I'm not interested, as it's not official, complete, or maintained. Is there an official mechanism that ZF provides for doing something similar to what I described? EDIT: I ended up going with Rails. Nothing compares.

    Read the article

  • Does Affordable, Stable ASP.NET MVC Hosting Exist?

    - by Chad
    I'm using webhost4life shared hosting right now. They advertise a 99.99% up-time guarantee, but the actual uptime is definitely not that. Their support has been good when I do contact them, but the service just isn't stable: the site will go down at random times for 5-10 minutes at a stretch. I know I'm on shared hosting, but I was hoping it would be more stable than it is. My app isn't at the point where it would need dedicated hosting yet; shared would be fine if it were stable enough. Any affordable hosting that you can vouch for (that supports ASP.NET MVC)?

    Read the article

  • Variable collation with MySQL stored function?

    - by Chad Johnson
    I want to do something like this in a stored procedure:

      IF case_sensitive = FALSE THEN
        SET search_collation = "utf8_unicode_ci";
      ELSE
        SET search_collation = "utf8_bin";
      END IF;

      INSERT INTO TABLE1 (field1, field2)
      SELECT * FROM TABLE2
      WHERE some_field LIKE '%rarf%' collate search_collation;

    However, when I do this, I get:

      ERROR 1273 (HY000): Unknown collation: 'search_collation'

    Also, if I do what's suggested at http://stackoverflow.com/questions/1680850/mysql-stored-procedures-use-a-variable-as-the-database-name-in-a-cursor-declara/2070021#2070021 I get:

      Dynamic SQL is not allowed in stored function or trigger

    How can I use a dynamic collation?
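
    One way to stay inside a plain stored routine, sketched below: since the COLLATE clause cannot take a variable, branch on the flag and run two static statements instead. The table and column names are copied from the snippet above, and the explicit column list in the SELECT is an assumption made so it matches the INSERT:

      -- TABLE1, TABLE2, field1/field2 and some_field come from the question
      IF case_sensitive THEN
        INSERT INTO TABLE1 (field1, field2)
        SELECT field1, field2 FROM TABLE2
        WHERE some_field LIKE '%rarf%' COLLATE utf8_bin;
      ELSE
        INSERT INTO TABLE1 (field1, field2)
        SELECT field1, field2 FROM TABLE2
        WHERE some_field LIKE '%rarf%' COLLATE utf8_unicode_ci;
      END IF;

    In a stored procedure (as opposed to a stored function or trigger), building the statement with CONCAT and running it through PREPARE/EXECUTE is another option, since the dynamic-SQL restriction in that error only applies to functions and triggers.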

    Read the article

  • Stored procs breaking overnight

    - by Chad
    We are running MS SQL 2005, and we have been experiencing a very peculiar problem the past few days. I have two procs: one creates an hourly report of data, and another calls it, puts its results in a temp table, does some aggregations, and returns a summary. They work fine... until the next morning. The next morning, the calling proc suddenly complains about an invalid column name. The fix is simply a recompile of the calling proc, and all works well again. How can this happen? It's happened three nights in a row since moving these procs into production.
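
    As a hedged stopgap rather than a root-cause fix, the calling proc can be flagged so SQL Server throws away its cached plan and recompiles on the next run; the proc name below is hypothetical:

      -- dbo.usp_HourlySummary is a placeholder for the calling proc's name
      EXEC sp_recompile N'dbo.usp_HourlySummary';

    Creating the calling proc WITH RECOMPILE has the same effect on every execution, at the cost of never caching a plan for it.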

    Read the article

  • Storage drives are causing system crashes

    - by Chad
    I'm running CentOS 5.4 with a 750GB (NTFS) drive and a 2TB drive for storage. Originally I installed the 750GB drive and everything seemed fine; then I installed the 2TB drive, which already had an NTFS partition on it. I noticed that when I would copy a lot of videos, the system would crash (no mouse or response from the server) about 20 minutes in. After doing some troubleshooting I noticed the 750GB drive would also crash when doing the same task, so I suspected that NTFS might be the problem. I unmounted the 2TB drive and tried to partition and format it as ext2, but when using parted it would crash at the "writing inode tables" step. Looking at the dmesg logs, I believe this is the error: "mtrr: type mismatch for e0000000,10000000 old: write-back new: write-combining". Any idea as to what could be causing this?

    Read the article

  • How to store and compare time-zone sensitive times

    - by Chad Moran
    I have a data structure where an entity has times stored as an int (minutes into the day) for fast comparison. The entity also has a foreign key reference back to a TimeZone table, which contains the .NET CLR ID name and its Standard Time/Daylight Time acronyms. Since this information is stored as time-zone insensitive, I was wondering how, in LINQ to SQL, I could convert it into a UTC DateTime for comparison against other times that will be in UTC. Just to be clear, this conversion has to be done server-side so that I can execute filtering on the SQL Server and not the client. I am using .NET 3.5 SP1 and SQL Server 2008.

    Read the article

  • GCC compiling a dll with __stdcall

    - by Chad
    When we compile a dll using __stdcall inside Visual Studio 2008, the exported function names inside the dll look like:

      FunctionName

    But when we compile the same dll with GCC (using wx-dev-cpp), GCC appends the stdcall decoration (the size of the parameters in bytes), so the name of the function in Dependency Walker looks like:

      FunctionName@numberOfParameters, e.g. FunctionName@8

    How do you tell the GCC compiler to remove @nn from exported symbols in the dll?
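
    A hedged sketch of one common fix (the file names are made up): ask the MinGW linker to strip the @nn stdcall suffix from the export table with --kill-at. The related --add-stdcall-alias option exports both the decorated and undecorated names instead, which helps if some callers still expect the suffixed form.

      # mylib.dll and mylib.o are placeholder file names
      gcc -shared -o mylib.dll mylib.o -Wl,--kill-at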

    Read the article

  • PHP: creating a smooth edged circle, image or font?

    - by Chad Whitaker
    I'm making a PHP image script that will create circles at a given radius. I used:

      <?php imagefilledellipse($image, $cx, $cy, $w, $h, $color); ?>

    but hate the rough edges it produces. So I was thinking of making or using a circle font that I will output using:

      <?php imagettftext($image, $size, $angle, $x, $y, $color, 'fontfile.ttf', $text); ?>

    So that the font will produce a circle that has a smooth edge. My problem is making the "font size" match the "radius size". Any ideas? Or maybe a PHP class that will produce a smooth edge on a circle would be great! Thank you.
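
    One GD-only alternative that avoids fonts entirely, sketched below: draw the ellipse oversized on a temporary canvas and downsample it with imagecopyresampled, which anti-aliases the edge. $w, $h and the colours are assumptions standing in for the script's real values.

      <?php
      // $w and $h are assumed to hold the final (integer) width and height
      $scale = 4;  // draw at 4x, then shrink

      // Oversized temporary canvas with a filled ellipse
      $big = imagecreatetruecolor($w * $scale, $h * $scale);
      $bg  = imagecolorallocate($big, 255, 255, 255);
      imagefill($big, 0, 0, $bg);
      $fg  = imagecolorallocate($big, 0, 0, 0);
      imagefilledellipse($big, ($w * $scale) / 2, ($h * $scale) / 2, $w * $scale, $h * $scale, $fg);

      // Downsample onto the final image; the resampling smooths the edge
      $smooth = imagecreatetruecolor($w, $h);
      imagecopyresampled($smooth, $big, 0, 0, 0, 0, $w, $h, $w * $scale, $h * $scale);
      imagedestroy($big);
      ?>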

    Read the article

  • Set the map position in Google Maps with a TimerTask

    - by Chad White
    I would like to change the position of the map in Google Maps v2, but I've done it in a TimerTask - target, zoom, bearing and so on - and it throws "IllegalStateException - not on the main thread". What should I do? Any help?

      class Task extends TimerTask {
          @Override
          public void run() {
              CameraPosition cameraPosition = new CameraPosition.Builder()
                  .target(Zt)     // Sets the center of the map to Mountain View
                  .zoom(12)       // Sets the zoom
                  .bearing(180)   // Sets the orientation of the camera to east
                  .tilt(30)       // Sets the tilt of the camera to 30 degrees
                  .build();       // Creates a CameraPosition from the builder
              mMap.moveCamera(CameraUpdateFactory.newCameraPosition(cameraPosition));
          }
      }

      Timer timer = new Timer();
      timer.scheduleAtFixedRate(new Task(), 0, 20000);
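
    The GoogleMap object may only be touched from the main (UI) thread, and a TimerTask runs on a background thread. A minimal sketch of one fix, assuming Task is an inner class of the hosting Activity so runOnUiThread is in scope; mMap and Zt come from the snippet above:

      class Task extends TimerTask {
          @Override
          public void run() {
              // TimerTask runs on a background thread; hand the map work
              // back to the UI thread. mMap and Zt are the Activity fields
              // from the question.
              runOnUiThread(new Runnable() {
                  @Override
                  public void run() {
                      CameraPosition cameraPosition = new CameraPosition.Builder()
                          .target(Zt)
                          .zoom(12)
                          .bearing(180)
                          .tilt(30)
                          .build();
                      mMap.moveCamera(CameraUpdateFactory.newCameraPosition(cameraPosition));
                  }
              });
          }
      }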

    Read the article

  • How to deploy to multiple redundant production servers with "cap deploy"?

    - by Chad Johnson
    Capistrano is working great to deploy to a single server. However, I have multiple production API servers for my web application. When I deploy, my code needs to get deployed to every API server at once. Specifying each server manually is NOT the solution I am looking for (e.g. I don't want to do "cap api1 deploy; cap api2 deploy"). Is there a way, using Capistrano, to deploy to all servers at once with just a simple "cap deploy"? I'm wondering what changes I would need to make to a typical deploy.rb file, whether I'd need to create a separate file for each server, and whether and how the Capfile would need to be changed. Also, I need to be able to specify a different deploy_to path for each server. And ideally, I wouldn't have to repeat things in different config files for different servers (e.g. I wouldn't have to specify :repository, :application, etc. multiple times). I have spent hours searching Google on this and looking through tutorials, but I have found nothing helpful. Here is a snippet from my current deploy.rb file:

      set :application, "testapplication"
      set :repository, "ssh://domain.com//srv/hg/#{application}"
      set :scm, :mercurial
      set :deploy_to, "/srv/www/#{application}"

      role :web, "domain.com"
      role :app, "domain.com"
      role :db, "domain.com", :primary => true, :norelease => true

    Should I just use the multistage extension and do this?

      task :deploy_everything do
        system "cap api1 deploy"
        system "cap api2 deploy"
        system "cap api2 deploy"
      end

    That could work, but I feel like this isn't what this extension is meant for...
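
    For the simple case where the API servers can share the same deploy path, listing them all in the roles is enough for a single "cap deploy" to hit every box; the host names below are hypothetical. A minimal sketch:

      set :application, "testapplication"
      set :repository,  "ssh://domain.com//srv/hg/#{application}"
      set :scm,         :mercurial
      set :deploy_to,   "/srv/www/#{application}"

      # api1/api2.domain.com are placeholder host names; one "cap deploy"
      # now runs against every host listed in each role
      role :web, "api1.domain.com", "api2.domain.com"
      role :app, "api1.domain.com", "api2.domain.com"
      role :db,  "domain.com", :primary => true, :norelease => true

    Per-server deploy_to paths are the awkward part; that usually means either standardising the path across servers or reaching for the multistage extension with one stage per group of identically configured servers.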

    Read the article

  • Always-indexed MySQL indexing/searching replacements for InnoDB?

    - by Chad Johnson
    I am using InnoDB for a MySQL table, and obviously queries using LIKE and RLIKE/REGEXP can take a lot of time. I've tried Sphinx, and it works great, except that I have to re-index the content at intervals. I can re-index every minute, but I am wondering if there is either 1) a setting in Sphinx to keep records always indexed or 2) other software besides Sphinx that will keep records always indexed. I want the index to be updated immediately upon inserting or updating a record.

    Read the article

  • Can you in any way interface Ruby Gems with PHP, Python, etc.?

    - by Chad Johnson
    Stupid question, and forgive me for asking, but someone is asking me, and I am not a super expert with Rails yet. Suppose I have some Rails gem I've written. Now suppose a customer has some other framework, like Django or CakePHP, and I want to provide the functionality offered by my gem (e.g. CRUD for automotive data) to them as a module in their framework. Could I somehow make it so they could interface my gem with Django or CakePHP? Obviously I could do something with some API magic, and I'll probably end up going that route. But I just want to know whether there is a way to directly interface with gems from a non-Rails application.

    Read the article

  • Cache an FTP connection via session variables for use via AJAX?

    - by Chad Johnson
    I'm working on a Ruby web application that uses the Net::FTP library. One part of it allows users to interact with an FTP site via AJAX. When the user does something, an AJAX call is made, and then Ruby reconnects to the FTP server, performs an action, and outputs information. Every time the AJAX call is made, Ruby has to reconnect to the FTP server, and that's slow. Is there a way I could cache this FTP connection? I've tried caching it in the session hash, but "We're sorry, but something went wrong" is displayed, and a TCP dump is output to my logs whenever I attempt to store it in the session hash. I haven't tried memcache yet. Any suggestions?
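
    A socket can't be marshalled into the session store, which is why stashing the Net::FTP object in the session blows up. One workaround, sketched below under the assumption of a single long-lived app process: keep live connections in an in-memory hash keyed by session id (the module and method names are hypothetical).

      require 'net/ftp'

      # FtpPool is a made-up module name for this sketch
      module FtpPool
        @connections = {}

        # Reuse the caller's connection if it is still open; otherwise
        # log in again and remember the new connection.
        def self.for(session_id, host, user, password)
          ftp = @connections[session_id]
          return ftp if ftp && !ftp.closed?

          ftp = Net::FTP.new(host)
          ftp.login(user, password)
          @connections[session_id] = ftp
        end
      end

    With multiple app processes (a Mongrel cluster, Passenger, etc.) each process would hold its own pool, so a given user's requests would need to stick to one process for the cached connection to help.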

    Read the article
