Search Results

Search found 13451 results on 539 pages for 'physical environment'.


  • Difference between BackgroundWorker.ReportProgress() and Control.Invoke()

    - by ohadsc
    What is the difference between options 1 and 2 in the following?

        private void BGW_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 1; i <= 100; i++)
            {
                string txt = i.ToString();
                if (Test_Check.Checked)
                    // OPTION 1
                    Test_BackgroundWorker.ReportProgress(i, txt);
                else
                    // OPTION 2
                    this.Invoke((Action<int, string>)UpdateGUI, new object[] { i, txt });
            }
        }

        private void BGW_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            UpdateGUI(e.ProgressPercentage, (string)e.UserState);
        }

        private void UpdateGUI(int percent, string txt)
        {
            Test_ProgressBar.Value = percent;
            Test_RichTextBox.AppendText(txt + Environment.NewLine);
        }

    Looking at Reflector, Control.Invoke() appears to use:

        this.FindMarshalingControl().MarshaledInvoke(this, method, args, 1);

    whereas BackgroundWorker.ReportProgress() appears to use:

        this.asyncOperation.Post(this.progressReporter, args);

    (I'm just guessing these are the relevant function calls.) If I understand correctly, BGW posts its progress report request to the WinForms window, whereas Control.Invoke uses a CLR mechanism to invoke on the right thread. Am I close? And if so, what are the repercussions of using either? Thanks
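
    For context, a minimal sketch of the practical difference (reusing the question's UpdateGUI; this is not from the original post): Control.Invoke is synchronous, so the worker thread blocks until the UI thread has run the delegate, while ReportProgress posts through the SynchronizationContext captured when RunWorkerAsync was called and returns immediately, much like Control.BeginInvoke:

        private void BGW_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 1; i <= 100; i++)
            {
                string txt = i.ToString();

                // Synchronous: the worker waits here until the UI thread has updated the controls.
                this.Invoke((Action<int, string>)UpdateGUI, i, txt);

                // Asynchronous: the call is queued and the loop continues at once,
                // which is essentially what ReportProgress does for you.
                this.BeginInvoke((Action<int, string>)UpdateGUI, i, txt);
            }
        }

    So the repercussions are mostly about pacing: with Invoke a slow UI throttles the worker, while with ReportProgress or BeginInvoke a fast worker can flood the UI message queue.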

    Read the article

  • How to install Zend Framework on Windows

    - by sombe
    "installing Zend Framework is so easy!!!!" yeah right... Ok I'm working with a beginner's book and the ONE thing that is not excessively detailed is the most important part: Installing the darn thing. After browsing the quickstart guide for hours, all it said was: "download Zend [...] add the include directory (bla bla) and YOU'RE DONE!" right, i'm done using Zend. Ok, not really, not yet anyway. I beg of you people, I wanna go to bed, please tell me how (in simple 6th grade detail) to install the framework. I've got the unzipped folder in my htdocs directory, and I placed zf.bat+zf.php in the htdocs root. What's next? thank you so much. EDIT: Thanks guys for all the answers. Unfortunately I haven't been able to work with this or find a good enough resource to explain it to me in plain english. It seems that this framework adheres more so to programmers than to beginners. I've since yesterday read a little on CakePHP and found that it was incredibly easy to install and tune. As oppose to Zend Framework, where I had to dig in my "environment variables", configure "httpd.conf" and almost tie the knot between my computer driver cables to just get it running, CakePHP has already allowed me to put together a nice newbie application. In conclusion, I very much appreciate all of your help. I hope someone else venturing on ZF will be more successful with it. Thanks!

    Read the article

  • How to Store and Retrieve Images Using MsSQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into an MsSQL database. I'll try to break this down as best as I can: What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form with PHP handling the input data and placing it into the table? How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture. Would I have to have a header(Content type: image/jpeg)? I have no problem doing any of these things with MySQL, but the MsSQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
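
    On the first two points: varbinary(max) is the type to prefer going forward (the image type is deprecated), and Management Studio has no file picker for BLOB columns, but T-SQL can pull a file in directly. A hedged sketch with illustrative table, column and path names:

        -- varbinary(max) rather than the deprecated image type
        CREATE TABLE Pictures (
            Id        INT IDENTITY PRIMARY KEY,
            FileName  NVARCHAR(260)  NOT NULL,
            MimeType  VARCHAR(50)    NOT NULL,
            Data      VARBINARY(MAX) NOT NULL
        );

        -- Load a file from a query window (the path must be readable by the SQL Server service account)
        INSERT INTO Pictures (FileName, MimeType, Data)
        SELECT 'photo.jpg', 'image/jpeg', BulkColumn
        FROM OPENROWSET(BULK 'C:\temp\photo.jpg', SINGLE_BLOB) AS img;

    On the PHP side the flow is the same as with MySQL: SELECT the Data column, send a Content-Type: image/jpeg header, and echo the bytes.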

    Read the article

  • Boost link error when using "--layout=system" on VS2005

    - by Kevin
    I'm new to boost, and thought I'd try it out with some realistic deployment scenarios for the .dlls, so I used the following command to compile/install the libraries: .\bjam install --layout=system variant=debug runtime-link=shared link=shared --with-date_time --with-thread --with-regex --with-filesystem --includedir=<my include directory> --libdir=<my bin directory> > installlog.txt That seemed to work, but my simple program (taken right from the "Getting Started" page) fails: #include <boost/regex.hpp> #include <iostream> #include <string> // Place your functions after this line int main() { std::string line; boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" ); while (std::cin) { std::getline(std::cin, line); boost::smatch matches; if (boost::regex_match(line, matches, pat)) std::cout << matches[2] << std::endl; } } This fails with the following linker error: fatal error LNK1104: cannot open file 'libboost_regex-vc80-mt-1_42.lib' I'm sure that both the .lib and the .dlls are in that directory, and named how I want them to be (ie: boost_regex.lib, etc, all unversioned, as the --layout=system says). So why is it looking for the versioned type of it? And how do I get it to look for the unversioned type of the library? I've tried this with more "normal" options, such as below: .\bjam stage --build-type=complete --with-date_time --with-thread --with-filesystem --with-regex > mybuildlog.txt And that works fine. I made sure my compiler saw the "stage\lib" directory, and it compiled and ran fine with nothing beyond having the environment looking into the right lib directory. But when I took those "testing" directories away, and wanted to use these others (unversioned), then it failed. I'm under VS2005 here on XP. Any ideas?
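
    The versioned name is requested by Boost's MSVC auto-linking (boost/config/auto_link.hpp), which does not know the binaries were built with --layout=system. One hedged way to reconcile the two is to switch auto-linking off and name the library yourself:

        // Define these before any Boost header (or in the project's preprocessor settings).
        #define BOOST_ALL_DYN_LINK   // the libraries were built with link=shared
        #define BOOST_ALL_NO_LIB     // stop auto_link.hpp from requesting versioned names
        #include <boost/regex.hpp>

        // Then link the unversioned import library produced by --layout=system:
        #pragma comment(lib, "boost_regex.lib")

    (#pragma comment(lib, ...) is MSVC-specific, which matches the VS2005 setup here; adding the library in the project's linker settings works just as well.)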

    Read the article

  • File descriptor limits and default stack sizes

    - by Charles
    Where I work we build and distribute a library and a couple of complex programs built on that library. All code is written in C and is available on most 'standard' systems like Windows, Linux, AIX, Solaris and Darwin. I started in the QA department, and while running tests recently I have been reminded several times that I need to remember to set the file descriptor limits and default stack sizes higher or bad things will happen. This is particularly the case with Solaris and now Darwin. Now this is very strange to me because I am a believer in zero required environment fiddling to make a product work. So I am wondering if there are times where this sort of requirement is a necessary evil, or if we are doing something wrong. Edit: Great comments that describe the problem and a little background. However, I do not believe I worded the question well enough. Currently we require customers, and hence us the testers, to set these limits before running our code. We do not do this programmatically. And this is not a situation where they MIGHT run out; under normal load our programs WILL run out and seg fault. So, rewording the question: is requiring the customer to change these ulimit values to run our software to be expected on some platforms, i.e. Solaris or AIX, or are we as a company making it too difficult for these users to get going? Bounty: I added a bounty to hopefully get a little more information on what other companies are doing to manage these limits. Can you set these programmatically? Should we? Should our programs even be hitting these limits, or could this be a sign that things might be a bit messy under the covers? That is really what I want to know; as a perfectionist, a seemingly dirty program really bugs me.
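
    To the "can you set these programmatically?" part: the soft limits can be raised by the process itself at startup, up to the hard limit, which removes the ulimit step for most customers. A minimal POSIX sketch (not taken from the product's code):

        #include <stdio.h>
        #include <sys/resource.h>

        /* Raise the file-descriptor soft limit to the hard limit at startup. */
        static void raise_fd_limit(void)
        {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
                rl.rlim_cur = rl.rlim_max;
                if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                    perror("setrlimit(RLIMIT_NOFILE)");
            }
        }

    Raising the hard limit itself still needs privileges (or resource-control configuration on Solaris), and stack size is fixed earlier -- for the main thread at exec time, for worker threads when they are created -- so stack pressure is usually better handled by passing an explicit size via pthread_attr_setstacksize than by asking customers for a bigger default.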

    Read the article

  • How can I optimize the SELECT statement running on an Oracle database?

    - by Elvis Lou
    I have a SELECT statement in Oracle:

        SELECT COUNT(DISTINCT ds1.endpoint_msisdn) multiple30,
               dss1.service,
               dss1.endpoint_provisioning_id,
               dss1.company_scope,
               Nvl(x.subscription_status, dss1.subscription_status) subscription_status
        FROM   daily_summary ds1
               join daily_summary ds2 ON ds1.endpoint_msisdn = ds2.endpoint_msisdn,
               daily_summary_static dss1,
               daily_summary_static dss2,
               (SELECT NULL subscription_status FROM dual
                UNION ALL
                SELECT -2 subscription_status FROM dual) x
        WHERE  ds1.summary_ts >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
               AND ds1.summary_ts <= To_date('10-04-2012', 'dd-mm-yyyy')
               AND dss1.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
               AND dss1.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
               AND dss2.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30
               AND dss2.last_active <= To_date('10-04-2012', 'dd-mm-yyyy')
               AND dss1.service <> dss2.service
               AND ( dss1.company_scope = 2 OR dss1.company_scope = 5 )
               AND ( dss2.company_scope = 2 OR dss2.company_scope = 5 )
               AND dss1.company_scope = dss2.company_scope
               AND ds1.endpoint_noc_id = dss1.endpoint_noc_id
               AND ds1.endpoint_host_id = dss1.endpoint_host_id
               AND ds1.endpoint_instance_id = dss1.endpoint_instance_id
               AND ds2.endpoint_noc_id = dss2.endpoint_noc_id
               AND ds2.endpoint_host_id = dss2.endpoint_host_id
               AND ds2.endpoint_instance_id = dss2.endpoint_instance_id
               AND dss1.endpoint_provisioning_id = dss2.endpoint_provisioning_id
               AND Least(1, ds1.total_actions) = 1
               AND Least(1, ds2.total_actions) = 1
        GROUP  BY dss1.service,
                  dss1.endpoint_provisioning_id,
                  dss1.company_scope,
                  Nvl(x.subscription_status, dss1.subscription_status);

    This query took about 26 minutes to return in my environment, but if I remove this section:

        dss1.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND
        dss1.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND
        dss2.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND
        dss2.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND

    it only takes 20 seconds to run. We have an index on the column last_active; I don't know why that section slows the query down so much. Any ideas?

    Read the article

  • Solaris MySQL Unable to start successfully

    - by Iscariot
    Environment: Solaris 10. This MySQL server has been up and running for 6 months; today, all of a sudden, it crashed. When typing 'mysql' as a regular user it gives the error:

        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'

    When typing mysql as root it says "mysql: not found". The service tries to start mysqld, it stays up for 9-10 seconds, and then the process restarts. Below are the application logs.

        Application-database-mysql_mysql-csk.log

        [ May 30 22:37:52 Enabled. ]
        [ May 30 22:37:58 Rereading configuration. ]
        [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:37:59 Method "start" exited with status 0 ]
        [ May 30 22:38:13 Stopping because all processes in service exited. ]
        [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:13 Method "stop" exited with status 0 ]
        [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ]
        /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid
        [ May 30 22:38:13 Method "start" exited with status 0 ]
        [ May 30 22:38:25 Stopping because all processes in service exited. ]
        [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ]
        [ May 30 22:38:25 Method "stop" exited with status 0 ]

    I am hoping someone might have run into this before and might know how to fix it.
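
    A hedged pointer for anyone debugging the same restart loop: the SMF application log above only records the cycle; the reason mysqld exits is normally in mysqld's own error log inside the datadir, and a full data pool is one classic cause of this symptom:

        # the error log is conventionally <hostname>.err in the datadir
        tail -100 /dbpool1/data/*.err

        # check that the pool backing the datadir is not full
        df -h /dbpool1
        zfs list dbpool1      # if dbpool1 is a ZFS pool

    The "mysql: not found" message as root is a separate issue: root simply does not have /opt/coolstack/mysql/bin on its PATH.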

    Read the article

  • Unity in C# for Platform Specific Implementations

    - by DxCK
    My program has heavy interaction with the operating system through Win32 API functions. Now I want to migrate my program to run under Mono on Linux (no Wine), and this requires different implementations of the interaction with the operating system. I started designing code that can have different implementations for different platforms and is extensible for future platforms.

        public interface ISomeInterface
        {
            void SomePlatformSpecificOperation();
        }

        [PlatformSpecific(PlatformID.Unix)]
        public class SomeImplementation : ISomeInterface
        {
            #region ISomeInterface Members
            public void SomePlatformSpecificOperation()
            {
                Console.WriteLine("From SomeImplementation");
            }
            #endregion
        }

        public class PlatformSpecificAttribute : Attribute
        {
            private PlatformID _platform;

            public PlatformSpecificAttribute(PlatformID platform)
            {
                _platform = platform;
            }

            public PlatformID Platform
            {
                get { return _platform; }
            }
        }

        public static class PlatformSpecificUtils
        {
            public static IEnumerable<Type> GetImplementationTypes<T>()
            {
                foreach (Assembly assembly in AppDomain.CurrentDomain.GetAssemblies())
                {
                    foreach (Type type in assembly.GetTypes())
                    {
                        if (typeof(T).IsAssignableFrom(type) && type != typeof(T) && IsPlatformMatch(type))
                        {
                            yield return type;
                        }
                    }
                }
            }

            private static bool IsPlatformMatch(Type type)
            {
                return GetPlatforms(type).Any(platform => platform == Environment.OSVersion.Platform);
            }

            private static IEnumerable<PlatformID> GetPlatforms(Type type)
            {
                return type.GetCustomAttributes(typeof(PlatformSpecificAttribute), false)
                    .Select(obj => ((PlatformSpecificAttribute)obj).Platform);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Type first = PlatformSpecificUtils.GetImplementationTypes<ISomeInterface>().FirstOrDefault();
            }
        }

    I see two problems with this design: I can't force the implementations of ISomeInterface to have a PlatformSpecificAttribute, and multiple implementations can be marked with the same PlatformID, in which case I don't know which to use in Main (using the first one is, well, ugly). How do I solve these problems? Can you suggest another design?
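
    One hedged alternative worth considering (names here are illustrative, not from the question): trade the attribute scan for an explicit registry keyed by PlatformID. Registration happens at startup rather than by convention, which addresses both problems -- an unregistered platform fails fast, and a duplicate registration throws:

        using System;
        using System.Collections.Generic;

        // Implementations are registered explicitly instead of discovered by reflection,
        // so "no attribute" and "two implementations for one platform" cannot happen silently.
        public static class PlatformFactory<T>
        {
            private static readonly Dictionary<PlatformID, Func<T>> factories =
                new Dictionary<PlatformID, Func<T>>();

            public static void Register(PlatformID platform, Func<T> factory)
            {
                factories.Add(platform, factory);   // Add throws on a duplicate platform
            }

            public static T Create()
            {
                return factories[Environment.OSVersion.Platform]();
            }
        }

        // usage, e.g. at application startup:
        // PlatformFactory<ISomeInterface>.Register(PlatformID.Unix, () => new SomeImplementation());
        // ISomeInterface impl = PlatformFactory<ISomeInterface>.Create();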

    Read the article

  • Should I advocate migrating from Access to (My)SQL

    - by HotOil
    Hi: We have a windows MFC app that is written against an access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable as it is part of the manufacturing time for our widgets. The scenario is this: as each widget is completed, it gets a record in the db.. by the end of the year, the db is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year. We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it. It is my understanding that if we were using sql, the search time would not go up as the table gets bigger because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth it to go to the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now, and maybe automatically purge the older records from time to time, and add additional facilities to the app to get at the older records when needed? Thanks for any wisdom you can share..

    Read the article

  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when I finish the tests I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the tests and not get any false positives. Now, when the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails because I check for MySQL errors in it like this:

        try {
            if (!mysqli_query($connection, $query)) {
                $this->assertTrue(false);
            }
        } catch (Exception $exc) {
            $msg  = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
            $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
            $this->fail($msg);
        }

    The error I get is a primary key violation, which is expected because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed it didn't run tearDownAfterTestClass. Now, I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails? NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the teardown is to remove any data we have added because of the test so that we don't have to restore the database every time we run the tests.

    Read the article

  • Using twig variable to dynamically call an imported macro sub-function

    - by Chausser
    I am attempting to use a variable to call a specific macro name. I have a macros file that is being imported:

        {% import 'form-elements.html.twig' as forms %}

    In that file are all the form element macros: text, textarea, select, radio, etc. I have an array variable that gets passed in with the elements in it:

        $elements = array(
            array(
                'type'  => 'text',
                'value' => 'some value',
                'atts'  => null,
            ),
            array(
                'type'  => 'text',
                'value' => 'some other value',
                'atts'  => null,
            ),
        );

        {{ elements }}

    What I'm trying to do is generate those elements from the macros. They work just fine when called by name:

        {{ forms.text(element.0.name, element.0.value, element.0.atts) }}

    However, what I want to do is something like this:

        {% for element in elements %}
            {{ forms[element.type](element.name, element.value, element.atts) }}
        {% endfor %}

    I have tried the following, all resulting in the same error:

        {{ forms["'"..element.type.."'"](element.name, element.value, element.atts) }}
        {{ forms.(element.type)(element.name, element.value, element.atts) }}
        {{ forms.{element.type}(element.name, element.value, element.atts) }}

    This unfortunately throws the following error:

        Fatal error: Uncaught exception 'LogicException' with message 'Attribute "value" does not
        exist for Node "Twig_Node_Expression_GetAttr".' in Twig\Environment.php on line 541

    Any help or advice on a solution or a better schema to use would be very helpful.

    Read the article

  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task generated by the Rails whenever plugin; it should store remote data in the Rails cache for display. What I have is this:

        # schedule.rb
        set :path, '/var/www/apps/tuexplore/current'

        every 1.hour do
          runner "Weather.cache_remote", :environment => :production
        end

    which calls this model:

        class Weather
          def self.cache_remote
            Rails.cache.write('weather_data',
              Net::HTTP.get_response(URI.parse(WEATHER_URL)).body)
          end
        end

    Calling whenever returns this:

        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin
        0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote"

    This doesn't work. Calling the weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output into the page. Additional information: I can log into the production console, run Weather.cache_remote, and then read the data from the Rails cache. I'd be really appreciative if someone could point out the error of my ways. If further explanation is needed please ask. Thanks in advance for any pointers.
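
    A hedged guess at the cause, since the log does show the write happening: Rails 2.3's default cache store is an in-process MemoryStore, so a write made inside the short-lived script/runner process is never visible to the web processes. Pointing the cache at a store shared between processes makes the hourly job useful, e.g. in config/environments/production.rb (paths are illustrative):

        # file-based cache shared by the runner and the web processes
        config.cache_store = :file_store, "#{RAILS_ROOT}/tmp/cache"

        # or, if memcached is available:
        # config.cache_store = :mem_cache_store, 'localhost:11211'

    That would also explain why the same call works from a controller and from the production console: there the write and the read happen in the same process.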

    Read the article

  • Global hotkey capture in VB.net

    - by ggonsalv
    I want my app, which is minimized, to capture data selected in another app's window when a hot key is pressed. My app definitely doesn't have the focus. Additionally, when the hot key is pressed I want to present a fading popup (Outlook style) so my app never gets focus. At a minimum I want to capture the window name, process ID and the selected data. The app which has focus is not my application. I know one option is to sniff the clipboard, but are there any other solutions? This is to audit the rate of data entry into another system over which I have no control; it is a mainframe emulation client program (Attachmate). The plan is: complete data entry in Application X; select a certain section of the screen in App X which is proof of data entry (a transaction ID); press the magic hotkey, which then 'sends' the selection to my app. From System.Environment or System.Threading I can find the Windows logon. Similarly, I can also capture the time. All the data will be logged to SQL. Once complete, show an Outlook-style popup saying the data entry has been logged. Any thoughts?

    Read the article

  • Create new or update existing entity at one go with JPA

    - by Alex R
    I have a JPA entity that has a timestamp field and is distinguished by a complex identifier field. What I need is to update the timestamp in an entity that has already been stored, or otherwise create and store a new entity with the current timestamp. As it turns out, the task is not as simple as it seems at first sight. The problem is that in a concurrent environment I get a nasty "Unique index or primary key violation" exception. Here's my code:

        // Load existing entity, if any.
        Entity e = entityManager.find(Entity.class, id);
        if (e == null) {
            // Could not find entity with the specified id in the database, so create new one.
            e = entityManager.merge(new Entity(id));
        }
        // Set current time...
        e.setTimestamp(new Date());
        // ...and finally save entity.
        entityManager.flush();

    Please note that in this example the entity identifier is not generated on insert; it is known in advance. When two or more threads run this block of code in parallel, they may simultaneously get null from the entityManager.find(Entity.class, id) call, so they will attempt to save two or more entities at the same time with the same identifier, resulting in an error. I think there are a few solutions to the problem. Sure, I could synchronize this code block with a global lock to prevent concurrent access to the database, but would that be the most efficient way? Some databases support a very handy MERGE statement that updates an existing row or creates a new one if none exists, but I doubt that OpenJPA (the JPA implementation of my choice) supports it. Even if JPA does not support SQL MERGE, I can always fall back to plain old JDBC and do whatever I want with the database, but I don't want to leave a comfortable API and mess with a hairy JDBC+SQL combination. Maybe there is a magic trick to fix it using the standard JPA API only, but I don't know it yet. Please help.
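
    One commonly suggested pattern (a hedged sketch, not OpenJPA-specific): let the database's unique constraint arbitrate the race, and treat the losing insert as "the row exists now, update it instead". The exception type and the need for a fresh transaction after the failed flush vary by provider, so this is an outline rather than drop-in code:

        import java.util.Date;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceException;

        public class EntityToucher {
            public void createOrTouch(EntityManager em, Object id) {
                try {
                    Entity e = em.find(Entity.class, id);
                    if (e == null) {
                        e = new Entity(id);
                        em.persist(e);
                    }
                    e.setTimestamp(new Date());
                    em.flush();                      // may throw if a concurrent insert won
                } catch (PersistenceException raceLost) {
                    // Another transaction inserted the same id first; after starting a
                    // new transaction, the row should now be found and can be updated.
                    Entity e = em.find(Entity.class, id);
                    e.setTimestamp(new Date());
                    em.flush();
                }
            }
        }

    In most providers the retry has to happen in a new transaction, because the failed flush marks the current one rollback-only; a pessimistic find with LockModeType.PESSIMISTIC_WRITE (JPA 2.0) is the other standard route.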

    Read the article

  • Call ASP.NET 2.0 server-side code from JavaScript

    - by Kannabiran
    I've been struggling with this for the past 3 days. I need to call ASP.NET server-side code from JavaScript when the user closes the browser. I'm using the following code to accomplish this. In my ASP.NET form I have various validation controls. Even if there are some validation errors, when I close the form the server-side code works perfectly in my development box (Windows 7), but the same code doesn't work in my production environment (Windows Server). Does it have something to do with the ValidationSummary or validation controls? The button control has CausesValidation set to false, so even if there is a validation error my form will post back. Am I correct? I suspect the form is not getting posted back to the server when there is a validation error, but I'm disabling all the validation controls in the JavaScript before calling the button click event. Can someone throw some light on this issue? There are a few blogs which suggest using jQuery or AJAX (page methods and ScriptManager).

        function ConfirmClose(e) {
            var evtobj = window.event ? event : e;
            if (evtobj == e) { // Firefox
                if (!evtobj.clientY) {
                    evtobj.returnValue = message;
                }
            }
            else { // IE
                if (evtobj.clientY < 0) {
                    DisablePageValidators();
                    document.getElementById('<%# buttonBrowserCloseClick.ClientID %>').click();
                }
            }
        }

        function DisablePageValidators() {
            if ((typeof (Page_Validators) != "undefined") && (Page_Validators != null)) {
                var i;
                for (i = 0; i < Page_Validators.length; i++) {
                    ValidatorEnable(Page_Validators[i], false);
                }
            }
        }

        //HTML
        <div style="display:none" >
            <asp:Button ID="buttonBrowserCloseClick" runat="server" onclick="buttonBrowserCloseClick_Click"
                Text="Button" Width="141px" CausesValidation="False" />

        // Server code
        protected void buttonBrowserCloseClick_Click(object sender, EventArgs e)
        {
            // Some C# code goes here
        }

    Read the article

  • HTC Incredible displaying blank ImageView

    - by Todd
    Hi, I have an app that displays an image in an ImageView using the setImageDrawable(Drawable) method. However, with the release of the Droid Incredible the images are coming up as a blank screen. I am using Drawable.createFromPath(Environment.getExternalStorageDirectory() + "\imagefile") to access the image from the SD card. I don't get any sort of error, just a black screen. I will get a null pointer exception if, after trying to load the image, I try to access a property of the Drawable. This makes me believe that the Drawable wasn't loaded, but I don't know why or how to make it work. This code has been working on all other Android devices, so I'm not sure what is different with the Incredible. Unfortunately I don't have access to an Incredible to test on, so I've got to rely on others to test and send me the log files. Any help you can offer would be greatly appreciated. If anyone knows how to replicate this issue on the emulator, that would be helpful too. I've configured an emulator with firmware 7 and the correct screen resolution, but I was unable to replicate the issue. Thanks.
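
    A hedged sketch of a more defensive load for the testers' log files (imageView stands for the ImageView from the layout and the file name is illustrative): createFromPath() returns null instead of throwing when the file cannot be read or decoded, so logging the path and the result shows whether the problem is the path (note the File join instead of the "\imagefile" backslash) or the decode:

        // uses java.io.File, android.os.Environment, android.util.Log, android.graphics.drawable.Drawable
        File image = new File(Environment.getExternalStorageDirectory(), "imagefile.jpg");
        if (!image.exists()) {
            Log.w("ImageLoad", "File not found: " + image.getAbsolutePath());
        }
        Drawable d = Drawable.createFromPath(image.getAbsolutePath());
        if (d == null) {
            Log.w("ImageLoad", "Could not decode: " + image.getAbsolutePath());
        } else {
            imageView.setImageDrawable(d);
        }

    Logging Environment.getExternalStorageState() alongside this can also tell you whether external storage was even mounted on the device in question.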

    Read the article

  • setting cookies

    - by aharon
    Okay, so I'm trying to set cookies using Ruby. I'm in a Rack environment. response[name]=value will add an HTTP header into the HTTP headers hash Rack has; I know that it works. The following method doesn't work:

        def set_cookie(opts = {})
          args = {
            :name    => nil,
            :value   => nil,
            :expires => Time.now + 314,
            :path    => '/',
            :domain  => Cambium.uri # contains the IP address of the dev server this is running on
          }.merge(opts)

          raise ArgumentError, ":name and :value are mandatory" if args[:name].nil? or args[:value].nil?

          response['Set-Cookie'] = "#{args[:name]}=#{args[:value]}; expires=#{args[:expires].clone.gmtime.strftime("%a, %d-%b-%Y %H:%M:%S GMT")}; path=#{args[:path]}; domain=#{args[:domain]}"
        end

    Why not? And how can I solve it? Thanks.
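
    A hedged sketch of one way to take the header formatting out of the equation: recent Rack versions ship Rack::Utils.set_cookie_header!, which builds the Set-Cookie value (and handles multiple cookies, which plain assignment overwrites):

        require 'rack/utils'

        def set_cookie(opts = {})
          args = { :expires => Time.now + 314, :path => '/', :domain => Cambium.uri }.merge(opts)
          raise ArgumentError, ":name and :value are mandatory" if args[:name].nil? || args[:value].nil?

          # set_cookie_header! understands :value, :expires, :path, :domain, :secure, :httponly
          Rack::Utils.set_cookie_header!(response, args.delete(:name), args)
        end

    It can also be worth dropping the :domain attribute entirely while testing; some browsers are picky about cookies whose domain is a bare IP address.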

    Read the article

  • NHibernate beginner - asking for directions

    - by George
    Hello guys. I'm starting off with NHibernate now and I don't yet have a testable environment. I would like to know from you experienced fellows if there is a problem with mapping an IList to a <set> in the .hbm file, like this:

        // C#
        IList<TrechoItem> trechos_item;

        <!-- xml .hbm -->
        <set name="TrechosItem" table="trecho_item" lazy="true" inverse="true" fetch="select">
          <key column="id_item"/>
          <one-to-many class="TrechoItem"/>
        </set>

    Or, in this case:

        IList<Autor> Autores;

        <set name="Autores" lazy="true" table="item_possui_autor">
          <key column="id_item"/>
          <many-to-many class="Autor" column="id_autor"/>
        </set>

    Is this possible? Or am I doing the wrong thing? I tried other collection mappings, but they did not give me all the options that <set> does. Thanks in advance.
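
    For what it's worth, the usual answer (a hedged sketch; property and class names are taken from the question): the <set> mapping expects a set-typed property (Iesi.Collections ISet in the NHibernate versions of that era), so an IList<T> property is normally mapped as a <bag>, or a <list> if an index column is needed:

        // keep the property as a list => map it as a bag
        public virtual IList<TrechoItem> TrechosItem { get; set; }

        <bag name="TrechosItem" table="trecho_item" lazy="true" inverse="true" fetch="select">
          <key column="id_item"/>
          <one-to-many class="TrechoItem"/>
        </bag>

        // or keep the <set> mapping and change the property type instead
        public virtual Iesi.Collections.Generic.ISet<TrechoItem> TrechosItem { get; set; }

    The same choice applies to the many-to-many Autores mapping.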

    Read the article

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and development servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server. Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory. Pushing to Production Server: Here's my newest source of confusion... In the response that I cited above, it made me curious as to why @Paul states that it's a completely different story when pushing to a live, development server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pit-falls? Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions? Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.
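
    On the bare-repo step described above, a common variant of the post-receive hook is a forced checkout into the web root rather than a clone. A hedged sketch (paths and branch name are examples):

        #!/bin/sh
        # hooks/post-receive in the bare repo on the staging server
        GIT_WORK_TREE=/var/www/staging git checkout -f release

    One caveat on the config-file plan: .gitignore only affects untracked files, so if .htaccess or config.php are already committed they will keep being overwritten on deploy. Keeping the per-environment files untracked (or committing templates such as config.php.example and copying them once per server) avoids that, and the same answer applies on the production side.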

    Read the article

  • Bash script to insert code from one file at a specific location in another file?

    - by Kurtosis
    I have a fileA with a snippet of code, and I need a script to insert that snippet into fileB on the line after a specific pattern. I'm trying to make the accepted answer in this thread work, but it's not, and is not giving an error, so not sure why not:

        sed -e '/pattern/r text2insert' filewithpattern

    Any suggestions?

    pattern (insert snippet on line after):

        def boot {

    also tried escaped patterns but no luck:

        def\ boot\ {
        def\ boot\ \{

    fileA snippet:

        LiftRules.htmlProperties.default.set((r: Req) => new Html5Properties(r.userAgent))

    fileB (Boot.scala):

        package bootstrap.liftweb

        import net.liftweb._
        import util._
        import Helpers._
        import common._
        import http._
        import sitemap._
        import Loc._

        /**
         * A class that's instantiated early and run. It allows the application
         * to modify lift's environment
         */
        class Boot {
          def boot {
            // where to search snippet
            LiftRules.addToPackages("code")

            // Build SiteMap
            val entries = List(
              Menu.i("Home") / "index", // the simple way to declare a menu

              // more complex because this menu allows anything in the
              // /static path to be visible
              Menu(Loc("Static", Link(List("static"), true, "/static/index"), "Static Content")))

            // set the sitemap. Note if you don't want access control for
            // each page, just comment this line out.
            LiftRules.setSiteMap(SiteMap(entries:_*))

            // Use jQuery 1.4
            LiftRules.jsArtifacts = net.liftweb.http.js.jquery.JQuery14Artifacts

            // Show the spinny image when an Ajax call starts
            LiftRules.ajaxStart = Full(() => LiftRules.jsArtifacts.show("ajax-loader").cmd)

            // Make the spinny image go away when it ends
            LiftRules.ajaxEnd = Full(() => LiftRules.jsArtifacts.hide("ajax-loader").cmd)

            // Force the request to be UTF-8
            LiftRules.early.append(_.setCharacterEncoding("UTF-8"))
          }
        }
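
    A hedged check on the command itself, using the file names from the question: the space and the brace need no escaping inside a sed address, and sed writes to stdout unless told otherwise, so the edit may simply be vanishing rather than failing:

        # print to stdout first, to confirm the pattern actually matches
        sed '/def boot {/r fileA' Boot.scala

        # then write the result back (GNU sed has -i; the portable form is a temp file)
        sed '/def boot {/r fileA' Boot.scala > Boot.scala.new && mv Boot.scala.new Boot.scala

    If the pattern still never matches, it is worth checking that line in Boot.scala for tabs or trailing characters, e.g. with grep -n 'def boot' Boot.scala.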

    Read the article

  • Is there some formal way to update the browser detection files for ASP.Net?

    - by Deane
    I have an ASP.Net site on which we're using control adapters. We have the adapters mapped to a "refID" of "Default." These adapters are working fine on all browsers except Chrome and Safari. For those browsers, they do not execute. I've given up trying to figure out why -- I have a question here on SO that no one has been able to answer, and I've been researching it for days now. It's just inexplicable. I have tested the same code in my local environment, and it works just fine. Additionally, no one else can replicate my problem on other servers. It seems to be somehow confined to the machines at my client's site. Could they be somehow out-of-date? If this is the case, is there some way to "update" the .browser files? I'm half-tempted to just copy the .browser files out of the Framework directory from my machine over to theirs, but I'm curious is there's something more formal than this? Is there some other source of data that ASP.Net uses for browser detection other than these files?

    Read the article

  • CONSOLE appender was turned off. But JBoss still writes some traces to STDOUT

    - by Vladimir Bezugliy
    CONSOLE appender was turned off. But JBoss still writes some traces to STDOUT during startup. Why? C:\as\jboss-5.1.0.GA\bin>start.cmd Calling C:\as\jboss-5.1.0.GA\bin\run.conf.bat =============================================================================== JBoss Bootstrap Environment JBOSS_HOME: C:\as\jboss-5.1.0.GA JAVA: C:\as\jdk1.6.0_07\bin\java JAVA_OPTS: -Dprogram.name=run.bat -Xms512M -Xmx758M -XX:MaxPermSize=256M -server CLASSPATH: C:\as\jboss-5.1.0.GA\bin\run.jar =============================================================================== 13:11:25,080 INFO [ServerImpl] Starting JBoss (Microcontainer)... 13:11:25,080 INFO [ServerImpl] Release ID: JBoss [The Oracle] 5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053) 13:11:25,080 INFO [ServerImpl] Bootstrap URL: null 13:11:25,080 INFO [ServerImpl] Home Dir: C:\as\jboss-5.1.0.GA 13:11:25,080 INFO [ServerImpl] Home URL: file:/C:/as/jboss-5.1.0.GA/ 13:11:25,080 INFO [ServerImpl] Library URL: file:/C:/as/jboss-5.1.0.GA/lib/ 13:11:25,080 INFO [ServerImpl] Patch URL: null 13:11:25,080 INFO [ServerImpl] Common Base URL: file:/C:/as/jboss-5.1.0.GA/common/ 13:11:25,080 INFO [ServerImpl] Common Library URL: file:/C:/as/jboss-5.1.0.GA/common/lib/ 13:11:25,080 INFO [ServerImpl] Server Name: web 13:11:25,080 INFO [ServerImpl] Server Base Dir: C:\as\jboss-5.1.0.GA\server 13:11:25,080 INFO [ServerImpl] Server Base URL: file:/C:/as/jboss-5.1.0.GA/server/ 13:11:25,080 INFO [ServerImpl] Server Config URL: file:/C:/as/jboss-5.1.0.GA/server/web/conf/ 13:11:25,080 INFO [ServerImpl] Server Home Dir: C:\as\jboss-5.1.0.GA\server\web 13:11:25,080 INFO [ServerImpl] Server Home URL: file:/C:/as/jboss-5.1.0.GA/server/web/ 13:11:25,080 INFO [ServerImpl] Server Data Dir: C:\as\jboss-5.1.0.GA\server\web\data ...

    Read the article

  • Starting Beyond Compare from the Command Line

    - by Logan
    I have Beyond Compare 3 installed at "C:\Program Files\Beyond Compare 3\BCompare.exe" and Cygwin at "C:\Cygwin\bin\bash.exe". What I would like is to be able to type a command such as

        diff <file1> <file2>

    into the Cygwin shell and have the shell fork a process opening the two files in Beyond Compare. I looked at the Beyond Compare support page, but I'm afraid it was too brief for me. I tried copying the text verbatim (apart from the path to the executable) to no avail. Instead of using a batch file, create a file named "bc.sh" with the following line:

        "$(cygpath 'C:\Progra~1\Beyond~1\bcomp.exe')" `cygpath -w "$6"` `cygpath -w "$7"` /title1="$3" /title2="$5" /readonly

    Was I supposed to replace cygpath? I get a 'command not found' error when I enter the name of the script on the command line:

        gavina@whwgavina1 /cygdrive
        $ "C:\Documents and Settings\gavina\Desktop\bc.sh"
        bash: C:\Documents and Settings\gavina\Desktop\bc.sh: command not found

    Does anyone have Beyond Compare working as I have described? Is this even possible in a Windows environment? Thanks in advance!
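
    A hedged note: the "command not found" usually comes down to bash being handed a Windows-style path and a script that is not marked executable, not to Beyond Compare itself. The support-page line above is written for version-control integration (hence the $3/$5/$6/$7 arguments); for interactive use a simpler wrapper is enough -- install paths below are examples:

        #!/bin/sh
        # ~/bin/bc.sh -- usage: bc.sh left right
        "$(cygpath 'C:\Program Files\Beyond Compare 3\BCompare.exe')" \
            "$(cygpath -w "$1")" "$(cygpath -w "$2")"

    Then make it executable and call it with a POSIX path:

        chmod +x ~/bin/bc.sh
        ~/bin/bc.sh file1.txt file2.txt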

    Read the article

  • Can't get Rails app to start on Heroku

    - by jonnii
    I'm trying to deploy a rails app to heroku, but keep getting the following error. I'd have thought that managing the postgres gems would be something heroku would handle. I've tried everything I can think of short of installing postgres on my local machine, which I'd need to do if I wanted to install the postgres gem. There's also no gem called activerecord-postgresql-adapter... I'm guessing this is the standard adapter that comes with rails?? Any thoughts on how to fix this? App failed to start /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:76:in `establish_connection': Please install the postgresql adapter: `gem install activerecord-postgresql-adapter` (no such file to load -- pg) (RuntimeError) from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:60:in `establish_connection' from /usr/local/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:55:in `establish_connection' from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:438:in `initialize_database' from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:141:in `process' from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `send' from /usr/local/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/initializer.rb:113:in `run' from /disk1/home/slugs/135415_c7f31f0_9f1f/mnt/config/environment.rb:9 from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' ... 14 levels... from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval' from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize' from /home/heroku_rack/heroku.ru:1:in `new' from /home/heroku_rack/heroku.ru:1
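
    A hedged note for anyone hitting the same trace (the exact fix depends on the stack and Rails version, so treat this as an assumption to verify): on Rails 2.3 the usual answer was to declare the PostgreSQL adapter gem so Heroku installs and loads it in production, e.g. in config/environments/production.rb:

        # Heroku provides PostgreSQL in production even when development runs MySQL/SQLite
        config.gem 'pg'

    Keeping the declaration in the production environment file (or in a Heroku .gems manifest) means local development does not suddenly require a local PostgreSQL install.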

    Read the article

  • How to patch an S4 method in an R package?

    - by Richie Cotton
    If you find a bug in a package, it's usually possible to patch the problem with fixInNamespace, e.g. fixInNamespace("mean.default", "base"). For S4 methods, I'm not sure how to do it though. The method I'm looking at is in the gWidgetstcltk package. You can see the source code with:

        getMethod(".svalue", c("gTabletcltk", "guiWidgetsToolkittcltk"))

    I can't find the methods with fixInNamespace:

        fixInNamespace(".svalue", "gWidgetstcltk")
        Error in get(subx, envir = ns, inherits = FALSE) :
          object '.svalue' not found

    I thought setMethod might do the trick, but:

        setMethod(".svalue", c("gTabletcltk", "guiWidgetsToolkittcltk"),
          definition = function (obj, toolkit, index = NULL, drop = NULL, ...) {
            widget = getWidget(obj)
            sel <- unlist(strsplit(tclvalue(tcl(widget, "selection")), " "))
            if (length(sel) == 0) {
              return(NA)
            }
            theChildren <- .allChildren(widget)
            indices <- sapply(sel, function(i) match(i, theChildren))
            inds <- which(visible(obj))[indices]
            if (!is.null(index) && index == TRUE) {
              return(inds)
            }
            if (missing(drop) || is.null(drop))
              drop = TRUE
            chosencol <- tag(obj, "chosencol")
            if (drop)
              return(obj[inds, chosencol, drop = drop])
            else return(obj[inds, ])
          },
          where = "package:gWidgetstcltk"
        )

        Error in setMethod(".svalue", c("gTabletcltk", "guiWidgetsToolkittcltk"),  :
          the environment "gWidgetstcltk" is locked; cannot assign methods for function ".svalue"

    Any ideas?

    Read the article
