Search Results

Search found 212 results on 9 pages for 'jerry blair'.

Page 2/9 | < Previous Page | 1 2 3 4 5 6 7 8 9  | Next Page >

  • Returning Value of Radio Button Jquery [migrated]

    - by Jerry Walker
    I am trying to figure out why, when I run this code, I am getting undefined for my correct answers.

      $(document).ready (function () {
          //
          var answers = [["Fee","Fi","Fo"], ["La","Dee","Da"]],
              questions = ["Fee-ing?", "La-ing?"],
              corAns = ["Fee", "La"];
          var counter = 0;
          var $facts = $('#main_ .facts_div'),
              $question = $facts.find('.question'),
              $ul = $facts.find('ul'),
              $btn = $('.myBtn');
          $btn.on('click', function() {
              if (counter < questions.length) {
                  $question.text(questions[counter]);
                  var ansstring = $.map(answers[counter], function(value) {
                      return '<li><input type="radio" name="ans" value="0"/>' + value + '</li>'
                  }).join('');
                  $ul.html(ansstring);
                  var currentAnswers = $('input[name="ans"]:checked').map(function() {
                      return this.val();
                  }).get();
                  var correct = 0;
                  if (currentAnswers[counter]==corAns[counter]) {
                      correct++;
                  }
              } else {
                  $facts.text('You are done with the quiz ' + correct);
                  $(this).hide();
              }
              counter++;
          });
          //
      });

    It is quite long and I'm sorry about that, but I don't really know how to strip it down. I also realize this isn't the most elegant way to do this, but I just want to know why I can't seem to get my radio values. I will add the markup as well if anyone wants.

    Read the article

  • Can't log in on boot up

    - by Jerry Donnelly
    I set this computer up with Ubuntu for my neighbor about two years ago. Today she tried her normal boot up and log in and her password isn't accepted. I've double checked and she's using what I set her up to use, the caps lock key is okay, and I can't see any other reason for the problem. I'm not sure exactly what version of Ubuntu she has and I'm not an expert user myself. Is there a way to bypass the password screen on boot up that would let me get to Ubuntu and perhaps set her up as another user? She basically checks email and that's about it. Thanks for any assistance.

    Read the article

  • Finding the file that is on a bad block on a HFS+ volume (debugfs for HFS+)

    - by Blair Zajac
    I have a drive in our iMac that has bad blocks, as booting from an Ubuntu 11.10 live CD and using ddrescue -f /dev/sda /dev/null finds them. I'd like to get the drive to remap them by writing to the blocks, say using hdparm --write-sector, but I don't want to do this without knowing what's in those blocks and finding the file that owns them, so I can restore the file from another source. I found fileXray but don't feel like spending $79 to map a block to a file and hfsdebug has been taken offline. Are there suggestions on a tool or technique to use? I looked at all the Ubuntu HFS+ packages to see if they could provide this info but nothing jumped out at me. BTW, I used Disk Utility to erase the empty space, but it didn't get any of the bad blocks to be remapped, according to smartctl -A.

    Read the article

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com, or one of the top 100 on stackoverflow.com, which would you choose? At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills -- he may be purely a "lone coder", with little or no ability to help/work with others, may lack mentoring ability to help transfer his technical skills to others, etc. On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinion of the coder in question, and the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points -- a single up-vote (perhaps just out of courtesy) will counteract the effects of no fewer than 5 down-votes, and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak. So, which provides a more useful indication of the degree to which this particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

    Read the article

  • Python unittest (using SQLAlchemy) does not write/update database?

    - by Jerry
    Hi, I am puzzled at why my Python unittest runs perfectly fine without actually updating the database. I can even see the SQL statements from SQLAlchemy and step through the newly created user object's email -- ...INFO sqlalchemy.engine.base.Engine.0x...954c INSERT INTO users (user_id, user_name, email, ...) VALUES (%(user_id)s, %(user_name)s, %(email)s, ...) ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_id': u'4cfdafe3f46544e1b4ad0c7fccdbe24a', 'email': u'[email protected]', ...} > .../tests/unit_tests/test_signup.py(127)test_signup_success() -> user = user_q.filter_by(user_name='test').first() (Pdb) n ...INFO sqlalchemy.engine.base.Engine.0x...954c SELECT users.user_id AS users_user_id, ... FROM users WHERE users.user_name = %(user_name_1)s LIMIT 1 OFFSET 0 ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_name_1': 'test'} > .../tests/unit_tests/test_signup.py(128)test_signup_success() -> self.assertTrue(isinstance(user, model.User)) (Pdb) user <pweb.models.User object at 0x9c95b0c> (Pdb) user.email u'[email protected]' Yet at the same time when I login to the test database, I do not see the new record there. Is it some feature from Python/unittest/SQLAlchemy/Pyramid/PostgreSQL that I'm totally unaware of? Thanks. Jerry

    Read the article

  • Post-Purchase Social Media

    - by David Dorf
    When you make a particularly good purchase, the natural tendency is to share the experience with friends. You show them your cool new toy or garment, then explain how you discovered such a great deal, all the while implying you are the world's most savvy shopper. My wife does it with clothes, housewares, and books, and I do it with wiz-bang techie stuff. Post-purchase euphoria or buyer's remorse are associated with most purchases beyond day-to-day needs. So now let's add social media to the mix. Haul videos are a YouTube phenomenon where a shopper describes their latest haul on video. Blair Fowler, aka juicystar07, is an excellent example. She and her older sister's haul videos have been viewed 75,000,000 times, at times causing particular items to sell out after being showcased. If you're not already on this bandwagon, check out Blair's haul video from her trip to Forever21. There are a couple good articles on this trend from ABC's GMA, Slate, and NPR. Some retailers are already sending free products to these fashionistas in the hopes they'll be reviewed on camera. For those less willing to exert themselves, there's Blippy, a service that automatically tweets your purchases. Similar to Twitter, your purchases are tweeted so your friends can see what you've purchased and your network can make comments. In the example to the right, co-founder Philip Kaplan purchased a gift for his wife from the store Does Your Mother Know, proving the point that the need for privacy is overblown. Blippy has partnerships with selected merchants like Apple, Amazon, and Netflix and can also get purchases from the credit cards you've registered. When you register, you can configure whether to automatically tweet each purchase, or approve them first. No sense in broadcasting my need for Rogaine, right? This is a good thing for retailers, as it helps spread the word about purchases and gives other people ideas. Rick just bought an ooma from Amazon. What the heck is ooma? Oh, it's like Vonage but no monthly bills. I'm there.

    Read the article

  • Partial specialization with reference template parameter fails to compile in VS2005

    - by Blair Holloway
    I have code that boils down to the following:

      template <typename T> struct Foo {};

      template <typename T, const Foo<T>& I> struct FooBar {};

      ////////

      template <typename T> struct Baz {};

      template <typename T, const Foo<T>& I>
      struct Baz< FooBar<T, I> > {
          static void func(FooBar<T, I>& value);
      };

      ////////

      struct MyStruct {
          static const Foo<float> s_floatFoo;
      };

      // Elsewhere:
      const Foo<float> MyStruct::s_floatFoo;

      void callBaz() {
          typedef FooBar<float, MyStruct::s_floatFoo> FloatFooBar;
          FloatFooBar myFloatFooBar;
          Baz<FloatFooBar>::func(myFloatFooBar);
      }

    This compiles successfully under GCC; however, under VS2005, I get:

      error C2039: 'func' : is not a member of 'Baz<T>' with [ T=FloatFooBar ]
      error C3861: 'func': identifier not found

    However, if I change const Foo<T>& I to const Foo<T>* I (passing I by pointer rather than by reference), and define FloatFooBar as:

      typedef FooBar<float, &MyStruct::s_floatFoo> FloatFooBar;

    then both GCC and VS2005 are happy. What's going on? Is this some kind of subtle template substitution failure that VS2005 is handling differently to GCC, or a compiler bug? (The strangest thing: I thought I had the above code working in VS2005 earlier this morning. But that was before my morning coffee. I'm now not entirely certain I wasn't under some sort of caffeine-craving-induced delirium...)
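
    For reference, here is a minimal, self-contained sketch of the pointer-based variant the poster says both compilers accept. The names mirror the question; the float instantiation, the user-declared Foo constructor (so a const Foo can be default-initialized), and the main() driver are illustrative additions, not part of the original code.

      // Sketch of the workaround: make the non-type parameter a pointer
      // to a const Foo<T> instead of a reference.
      template <typename T> struct Foo {
          Foo() {}  // user-declared ctor so "const Foo<float>" can be default-initialized
      };

      template <typename T, const Foo<T>* I> struct FooBar {};

      template <typename T> struct Baz {};

      // Partial specialization of Baz for any FooBar<T, I>.
      template <typename T, const Foo<T>* I>
      struct Baz< FooBar<T, I> > {
          static void func(FooBar<T, I>& value) { (void)value; }
      };

      struct MyStruct {
          static const Foo<float> s_floatFoo;
      };
      const Foo<float> MyStruct::s_floatFoo;

      int main() {
          // The template argument is now the address of the object.
          typedef FooBar<float, &MyStruct::s_floatFoo> FloatFooBar;
          FloatFooBar myFloatFooBar;
          Baz<FloatFooBar>::func(myFloatFooBar);
          return 0;
      }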

    Read the article

  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
    We have a C++ shared library that uses ZeroC's Ice library for RPC, and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications, so we don't have control over when the process calls fork(), so we need a way to safely shut down Ice and lock our mutexes while the process forks. Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state:

      Alternatively, some libraries might have been able to supply just a child routine that reinitializes the mutexes in the library and all associated states to some known value (for example, what it was when the image was originally executed). This approach is not possible, though, because implementations are allowed to fail *_init() and *_destroy() calls for mutexes and locks if the mutex or lock is still locked. In this case, the child routine is not able to reinitialize the mutexes and locks.

    On Linux, this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. This program is linked from this good thread. Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to have pointers to mutexes and then have the child make new pthread_mutex_t on the heap and leave the parent's mutexes alone, thereby having a small memory leak. The only issue is how to reinitialize the state of the library, and I'm thinking of resetting a pthread_once_t. Maybe, because POSIX has an initializer for pthread_once_t, it can be reset to its initial state.

      #include <pthread.h>
      #include <stdlib.h>
      #include <string.h>

      static pthread_once_t once_control = PTHREAD_ONCE_INIT;
      static pthread_mutex_t *mutex_ptr = 0;

      static void setup_new_mutex() {
          mutex_ptr = malloc(sizeof(*mutex_ptr));
          pthread_mutex_init(mutex_ptr, 0);
      }

      static void prepare() {
          pthread_mutex_lock(mutex_ptr);
      }

      static void parent() {
          pthread_mutex_unlock(mutex_ptr);
      }

      static void child() {
          // Reset the once control.
          pthread_once_t once = PTHREAD_ONCE_INIT;
          memcpy(&once_control, &once, sizeof(once_control));
          setup_new_mutex();
      }

      static void init() {
          setup_new_mutex();
          pthread_atfork(&prepare, &parent, &child);
      }

      int my_library_call(int arg) {
          pthread_once(&once_control, &init);
          pthread_mutex_lock(mutex_ptr);
          // Do something here that requires the lock.
          int result = 2*arg;
          pthread_mutex_unlock(mutex_ptr);
          return result;
      }

    In the above sample, in child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky, but maybe the best way of dealing with this while skirting the standards. If the pthread_once_t contains a mutex, then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap, then it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special, which would defeat this.
    Searching the comp.programming.threads group for pthread_atfork() shows a lot of good discussion and how little the POSIX standard really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?

    Read the article

  • OPOS not working on 64 bit system

    - by Blair Mahaffy
    Anyone have experience with OPOS? I can't get my app to recognize the LDNs for the devices running on a 64 bit machine. I've got down to the point where I know that the OleforRetail stuff is now under Wow6432Node in the Registry. I suspect the common controls can't find the LDN because of this. Is there any kind of workaround? Failing that, is there a centralized OPOS development forum somewhere? BTW: I work with the common controls supplied by Monroe Consulting. Thanks!

    Read the article

  • Best SVN Tools

    - by pete blair
    Just wanted to see what SVN tools people use; perhaps I can find some new cool ones. I'm pretty much standard right now: ankh and tortoise. See also http://stackoverflow.com/questions/372687/good-visual-studio-svn-tool

    Read the article

  • possible to save pdf with .net from populated form by .fdf

    - by Blair Jones
    I have a process that creates form data in the form of an .fdf file that then has a reference to the .pdf document that is its "parent". Is it possible with any .NET process to save that .pdf file (populated with the data that came in from the .fdf)? I need this because I need to email the fully populated .pdf documents out. I've been just sending the .fdfs with fully qualified links to the pdfs, but some people are having problems with it and I'd rather just go full-blown pdf if I can do it. FYI, my server does have a licensed copy of Acrobat installed if that matters....

    Read the article

  • Select highest rated, oldest track

    - by Blair McMillan
    I have several tables:

      CREATE TABLE [dbo].[Tracks](
          [Id] [uniqueidentifier] NOT NULL,
          [Artist_Id] [uniqueidentifier] NOT NULL,
          [Album_Id] [uniqueidentifier] NOT NULL,
          [Title] [nvarchar](255) NOT NULL,
          [Length] [int] NOT NULL,
          CONSTRAINT [PK_Tracks_1] PRIMARY KEY CLUSTERED ( [Id] ASC )
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]

      CREATE TABLE [dbo].[TrackHistory](
          [Id] [int] IDENTITY(1,1) NOT NULL,
          [Track_Id] [uniqueidentifier] NOT NULL,
          [Datetime] [datetime] NOT NULL,
          CONSTRAINT [PK_TrackHistory] PRIMARY KEY CLUSTERED ( [Id] ASC )
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]

      INSERT INTO [cooltunes].[dbo].[TrackHistory] ([Track_Id], [Datetime])
      VALUES ("335294B0-735E-4E2C-8389-8326B17CE813", GETDATE())

      CREATE TABLE [dbo].[Ratings](
          [Id] [int] IDENTITY(1,1) NOT NULL,
          [Track_Id] [uniqueidentifier] NOT NULL,
          [User_Id] [uniqueidentifier] NOT NULL,
          [Rating] [tinyint] NOT NULL,
          CONSTRAINT [PK_Ratings] PRIMARY KEY CLUSTERED ( [Id] ASC )
          WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
      ) ON [PRIMARY]

      INSERT INTO [cooltunes].[dbo].[Ratings] ([Track_Id], [User_Id], [Rating])
      VALUES ("335294B0-735E-4E2C-8389-8326B17CE813", "C7D62450-8BE6-40F6-80F1-A539DA301772", 1)

      Users
          [User_Id] | Guid
          Other fields

    Links between the tables are pretty obvious. TrackHistory has each track added to it as a row whenever it is played, i.e. a track will appear in there many times. Ratings value will either be 1 or -1. What I'm trying to do is select the track with the highest rating, that is more than 2 hours old, and if there is a duplicate rating for a track (i.e. a track receives 6 +1 ratings and one -1 rating, giving that track a total rating of 5, and another track also has a total rating of 5), the track that was last played the longest ago should be returned. (If all tracks have been played within the last 2 hours, no rows should be returned.) I'm getting somewhere doing each part individually using the link above, SUM(Value) and GROUP BY Track_Id, but I'm having trouble putting it all together. Hopefully someone with a bit more (MS)SQL knowledge will be able to help me. Many thanks!

    Read the article

  • "Disabling" an HTML table with javascript

    - by Blair Jones
    I've seen this done in a lot of sites recently, but can't seem to track one down. Essentially I want to "disable" an entire panel (that's in the form of an HTML table) when a button is clicked. By disable I mean I don't want the form elements within the table to be usable and I want the table to sort of fade out. I've been able to accomplish this by putting a "veil" over the table with an absolutely positioned div that has a white background with a low opacity (so you can see the table behind it, but can't click anything because the div is in front of it). This also adds the faded effect that I want. However, when I set the height of the veil to 100% it only goes to the size of my screen (not including the scrolling), so if a user scrolls up or down, they see the edge of the veil and that's not pretty. I'm assuming this is typically done in a different fashion. Does anyone have some suggestions for a better way to accomplish this?

    Read the article

  • Naming a typedef for a boost::shared_ptr<const Foo>

    - by Blair Zajac
    Silly question, but say you have class Foo:

      class Foo {
      public:
          typedef boost::shared_ptr<Foo> RcPtr;

          void non_const_method() {}
          void const_method() const {}
      };

    Having a const Foo::RcPtr doesn't prevent non-const methods from being invoked on the class; the following will compile:

      #include <boost/shared_ptr.hpp>

      int main() {
          const Foo::RcPtr const_foo_ptr(new Foo);
          const_foo_ptr->non_const_method();
          const_foo_ptr->const_method();
          return 0;
      }

    But naming a typedef ConstRcPtr implies, to me, that the typedef would be

      typedef const boost::shared_ptr<Foo> ConstRcPtr;

    which is not what I'm interested in. An odder name, but maybe more accurate, is RcPtrConst:

      typedef boost::shared_ptr<const Foo> RcPtrConst;

    However, Googling for RcPtrConst gets zero hits, so people don't use this as a typedef name :) Does anyone have any other suggestions?
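
    As a point of comparison, here is a small sketch (not from the question) of what the shared_ptr<const Foo> form being named would actually enforce, reusing the Foo class above; the commented-out call is the one that fails to compile:

      #include <boost/shared_ptr.hpp>

      int main() {
          // A shared pointer to a const Foo: the pointee, not the pointer, is const.
          boost::shared_ptr<const Foo> ptr_to_const_foo(new Foo);
          ptr_to_const_foo->const_method();        // OK: const member on a const Foo
          // ptr_to_const_foo->non_const_method(); // would not compile: Foo is const here
          return 0;
      }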

    Read the article

  • What do I name this class whose sole purpose is to report failure?

    - by Blair Holloway
    In our system, we have a number of classes whose construction must happen asynchronously. We wrap the construction process in another class that derives from an IConstructor class:

      class IConstructor {
      public:
          virtual void Update() = 0;
          virtual Status GetStatus() = 0;
          virtual int GetLastError() = 0;
      };

    There's an issue with the design of the current system - the functions that create the IConstructor-derived classes are often doing additional work which can also fail. At that point, instead of getting a constructor which can be queried for an error, a NULL pointer is returned. Restructuring the code to avoid this is possible, but time-consuming. In the meantime, I decided to create a constructor class which we create and return in case of error, instead of a NULL pointer:

      class FailedConstructor : public IConstructor {
      public:
          virtual void Update() {}
          virtual Status GetStatus() { return STATUS_ERROR; }
          virtual int GetLastError() { return m_errorCode; }
      private:
          int m_errorCode;
      };

    All of the above is the setup for a mundane question: what do I name the FailedConstructor class? In our current system, FailedConstructor would indicate "a class which constructs an instance of Failed", not "a class which represents a failed attempt to construct another class". I feel like it should be named for one of the design patterns, like Proxy or Adapter, but I'm not sure which.

    Read the article

  • Get the signed/unsigned variant of an integer template parameter without explicit traits

    - by Blair Holloway
    I am looking to define a template class whose template parameter will always be an integer type. The class will contain two members, one of type T, and the other as the unsigned variant of type T -- i.e. if T == int, then T_Unsigned == unsigned int. My first instinct was to do this:

      template <typename T>
      class Range {
          typedef unsigned T T_Unsigned; // does not compile
      public:
          Range(T min, T_Unsigned range);
      private:
          T m_min;
          T_Unsigned m_range;
      };

    But it doesn't work. I then thought about using partial template specialization, like so:

      template <typename T> struct UnsignedType {}; // deliberately empty
      template <> struct UnsignedType<int> { typedef unsigned int Type; };

      template <typename T>
      class Range {
          typedef UnsignedType<T>::Type T_Unsigned;
          /* ... */
      };

    This works, so long as you partially specialize UnsignedType for every integer type. It's a little bit of additional copy-paste work (slash judicious use of macros), but serviceable. However, I'm now curious - is there another way of determining the signed-ness of an integer type, and/or using the unsigned variant of a type, without having to manually define a Traits class per-type? Or is this the only way to do it?
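
    For what it's worth, here is a minimal sketch of the same class written against C++11's <type_traits>, which ships std::make_unsigned and removes the need for a hand-rolled per-type traits class (Boost.TypeTraits offers boost::make_unsigned for pre-C++11 toolchains). This assumes a C++11 compiler; the main() driver is only illustrative:

      #include <type_traits>

      template <typename T>
      class Range {
          // Derive the unsigned counterpart of T automatically.
          typedef typename std::make_unsigned<T>::type T_Unsigned;

      public:
          Range(T min, T_Unsigned range) : m_min(min), m_range(range) {}

      private:
          T m_min;
          T_Unsigned m_range;
      };

      int main() {
          Range<int> r(-5, 10u); // T = int, T_Unsigned = unsigned int
          (void)r;
          return 0;
      }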

    Read the article

  • Stage untracked files for commit without staging tracked file changes

    - by Blair Holloway
    Oftentimes, when using git, I find myself in this situation: I have changes to several files, but I only want to commit parts of them. I have added several untracked files, which I want to track and commit. Solving the first part is easy; I run: git add -p Then, I choose which hunks to stage, and which hunks remain in my working tree, but unstaged. However, git's patch mode skips over untracked files. What I would like to do is something like: git add --untracked But no such option appears to exist. If I have, say, six untracked files, I could stage them using add in interactive mode and the add untracked option, like so: git add -i a<CR> 1<CR> 2<CR> 3<CR> 4<CR> 5<CR> 6<CR> <CR> q<CR> I feel like there is, or should be, a quicker way of doing this, though. What am I missing?

    Read the article

  • Whole Lotta Virtualization Goin' On

    - by rickramsey
    Lately we've published a lot of content about virtualization. Here's a sampling.

    Podcast: Technology Preview of Transcendent Memory
    Turns out that in a virtual environment, RAM is the bottleneck. Not because it's slow, it's not, but because each CPU still has to use its own RAM. Which gets expensive. In this podcast, Dan Magenheimer describes how Oracle and the open source community taught the guest kernel in Oracle Linux to share its memory with other CPUs. Transcendent memory will wind up saving large data centers a lot of money. Find out how.

    Tech Article: How to Use Oracle VM Templates
    This article describes how to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. It also describes how to create a virtual machine based on that template and how you can clone the template and change the clone's configuration.

    Tech Article: How to Set Up a Load Balanced Application Across Two Oracle Solaris Zones
    Install Apache Tomcat on two Oracle Solaris zones. Connect them across a VPN. And let the Integrated Load Balancer in Oracle Solaris 11 manage traffic. Presto: high(er) availability in a single server.

    Tech Article: How to Install Oracle RAC on Oracle Solaris Zone Clusters
    Learn how to implement a multi-tiered database environment that isolates database tiers and administrative domains, while taking advantage of centralized (and simpler) cluster admin.

    For fans of Jerry Lee Lewis
    If you're a fan of Jerry Lee Lewis, you might enjoy this video.

    - Rick

    Read the article

  • Are there any real-world cases for C++ without exceptions?

    - by Martin
    In When to use C over C++, and C++ over C? there is a statement with respect to code size / C++ exceptions.

    Jerry answers (among other points): (...) it tends to be more difficult to produce truly tiny executables with C++. For really small systems, you're rarely writing a lot of code anyway, and the extra (...)

    to which I asked why that would be, to which Jerry responded: the main thing is that C++ includes exception handling, which (at least usually) adds some minimum to the executable size. Most compilers will let you disable exception handling, but when you do the result isn't quite C++ anymore. (...)

    which I do not really doubt on a technical, real-world level. Therefore I'm interested (purely out of curiosity) to hear about real-world examples where a project chose C++ as a language and then chose to disable exceptions. (Not merely "not use" exceptions in user code, but disable them in the compiler, so that you can't throw or catch exceptions.) Why did a project choose to do so (still using C++ and not C, but no exceptions) - what are/were the (technical) reasons?

    Addendum: For those wishing to elaborate on their answers, it would be nice to detail how the implications of no-exceptions are handled:
    - STL collections (vector, ...) do not work properly (allocation failure cannot be reported)
    - new can't throw
    - Constructors cannot fail
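
    As a small illustration of the last two points, here is a sketch (not from the thread) of the allocation style that exception-free C++ code typically falls back on: with exceptions disabled, a failed allocation cannot be reported by throwing std::bad_alloc, so callers use nothrow new and check for a null pointer explicitly. The Widget type is just a placeholder:

      #include <new>
      #include <cstdio>

      struct Widget {
          int value;
      };

      int main() {
          // Returns a null pointer on failure instead of throwing std::bad_alloc.
          Widget* w = new (std::nothrow) Widget;
          if (w == 0) {
              std::puts("allocation failed");
              return 1;
          }
          w->value = 42;
          delete w;
          return 0;
      }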

    Read the article

  • IIS 7.5, Multiple Application Pools, and URL Rewriting (403.18 -- Forbidden)

    - by Jerry Hewett
    Is there any way to configure IIS 7.5 to perform URL rewrites to different application pools on the same site without running into a 403.18 error? We're using Helicon ISAPI Rewrite 3 on IIS 6 and it's working like a charm. The root-level "application" is running under its own application pool, and on IIS 6 we have no problems doing URL rewrites from that application pool to any one of the other four application pools. But when I copy the same server configuration information over to IIS 7.5 the URL rewrites to any of the other application pools fail with a "403.18 -- Forbidden" error. The weird bit is that IIS 6 is not (at least as far as I can tell, by looking at the site Service configuration dialog) running under IIS 5 emulation mode, so somehow the rewrites aren't throwing 403.18 errors. So something must be different... but whatever it is, I sure haven't been able to figure it out. Btw, we're not married to Helicon ISAPI Rewrite. If there's another way to preserve our current rewrite configuration rules using another module or method I'd be more than happy to use it.

    Read the article

  • Command-line access to remote MySQL server

    - by Jerry Krinock
    I administer a website on a remote, shared host. My web host offers MySQL, and I am able to access this from my Mac OS X computer using a GUI program, Sequel Pro. That works great. But I want to script some queries, and Sequel Pro is not scriptable. What should I do? I've read about tunneling to mysql via SSH. I have shell access to the server, with an SSH key on my Mac, so ssh [email protected] -p 7978 gets me in. Should tunneling the MySQL port 3306 work? Like this? ssh [email protected] -L 3306:127.0.0.1:3306 (It "times out" after a minute.) Do I need to install mysql on my Mac?

    Read the article

  • How to install Web Deployment Agent

    - by Jerry
    I am trying to set up the TFS automated deploys, but I keep getting the following error message when running the deploy. It appears that I have a service called "Web Management Service", but the error message says that I need the "Web Deployment Agent Service". I tried installing Web Deploy 2.0, but the server said that I already had this installed. What can I do to fix this problem?

      Error Code: ERROR_DESTINATION_NOT_REACHABLE
      Could not connect to the destination computer ("myServer"). On the destination computer, make sure that Web Deploy is installed and that the required process ("Web Deployment Agent Service") is started.

    --Update--
    Looks like the Web Deployment agent is not installed by default. I had to re-install MSDeploy, select Change or Custom, then add the Web Deploy Agent service. Now the deploy works correctly.

    Read the article

  • Where do deleted items go on the hard drive ?

    - by Jerry
    After reading the quote below on the Casey Anthony trial (CNN), I am curious about where deleted files actually go on a hard drive, how they can be seen after being deleted, and to what extent the data can be recovered (fully, partially, etc). "Earlier in the trial, experts testified that someone conducted the keyword searches on a desktop computer in the home Casey Anthony shared with her parents. The searches were found in a portion of the computer's hard drive that indicated they had been deleted, Detective Sandra Osborne of the Orange County Sheriff's Office testified Wednesday in Anthony's capital murder trial." I know some of the questions here on SO address third party software that can be used for this kind of thing, but I'm more interested in how this data can be seen after deletion, where it resides on the hard drive, etc. I find the whole topic intriguing, so any additional insight is welcome.

    Read the article
