Search Results

Search found 69968 results on 2799 pages for 'real time updates'.

Page 307/2799 | < Previous Page | 303 304 305 306 307 308 309 310 311 312 313 314  | Next Page >

  • What is the risk of introducing non-standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risk of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (i.e. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist, I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

    Read the article

  • 5-year-old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run against a copy of the database on another server we keep as a backup; however, this will not last much longer and we are looking to upgrade. Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: side A inserts and updates most of the data; side B reads the data and does some minor updates. Currently our biggest day for side A is 800 users (of 40,000 users all year) entering data into the system. Our side B load is currently unknown, but we have a total of 1,000 clients. The system is most likely going to cap out at 5,000 side B clients, with about 300,000 side A users a year. The current database is 5 years old, so we can expect it to grow fairly rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So with that said, should we get a server for each side of the app, side A being the master and side B being the slave, with any updates made on side B routed to side A? So the question is: should I get two of these or one? 2 x Intel Nehalem Xeon E5520 2.26GHz (8 cores), 12GB DDR3 memory, 500GB SATA II HDD, 100Mbps port speed. And naturally I would need a redundant backup, so it could potentially be four of them.

    Read the article

  • How to ensure the local file is up-to-date or ahead (Dropbox sync) before TrueCrypt auto-mounts it?

    - by user620965
    There are a lot of tutorials out there stating that Dropbox's built-in encryption is not secure enough. Those tutorials recommend syncing a TrueCrypt container file so that all the files in it are securely encrypted. This setup is known to be limited: you can NOT have that TrueCrypt container file mounted at the same time in more than one location - if you change the contents of the container in more than one location at a time, this setup produces a conflict on the container file in the Dropbox system, resulting in one container file per location. In my case that issue is not relevant - I do not use my data in more than one location at a time. I want to use TrueCrypt's auto-mount feature on startup of Windows 7 to have a zero-configuration environment - and start working right away. But I want to ensure that the local TrueCrypt container file is up-to-date before TrueCrypt mounts it automatically - imagine you updated the contents of the container at your primary location and your secondary location was off for a long time. In that case it can take "a long time" until the Dropbox sync is complete (depending on your internet connection and the size of the container file). There is an option in TrueCrypt that ensures TrueCrypt does not update the timestamp of the container file, which speeds up the sync because the Dropbox client then does a differential sync instead of a time-consuming full sync. That is an improvement to the setup, but it does not fix my issue. The question is how to make the auto-mount function wait for the container file to be up-to-date (updated by Dropbox)? In contrast: if the file was changed locally but the remote file (in the Dropbox cloud) is still old (not yet updated by the sync process, or the sync is still in progress), TrueCrypt should not be made to wait for the sync. Suggestions?
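    One way to approach the "wait until synced" half of this, sketched in Python purely for illustration: poll the container's size and modification time and only mount once they have been stable for a while. The container path, the quiet period and especially the mount command are assumptions/placeholders, not a known TrueCrypt invocation.

        import os
        import subprocess
        import time

        CONTAINER = r"C:\Users\me\Dropbox\secure.tc"   # hypothetical path to the container
        QUIET_SECONDS = 60                             # how long the file must stay unchanged
        MOUNT_COMMAND = ["TrueCrypt.exe"]              # placeholder: add your usual mount options here

        def snapshot(path):
            st = os.stat(path)
            return (st.st_size, st.st_mtime)

        # Wait until the container's size and mtime have been stable for QUIET_SECONDS.
        last = snapshot(CONTAINER)
        stable_since = time.time()
        while time.time() - stable_since < QUIET_SECONDS:
            time.sleep(5)
            current = snapshot(CONTAINER)
            if current != last:
                last = current
                stable_since = time.time()   # file changed, restart the quiet-period timer

        # Only now hand the volume to TrueCrypt.
        subprocess.run(MOUNT_COMMAND)

    Note this only detects a sync that is already in progress; if the Dropbox client has not yet noticed a newer remote version, the file will look stable even though it is stale, so the quiet period is only a heuristic.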

    Read the article

  • sudo apt-get install python.pip python-dev Gives Error

    - by user2539745
    I am learning Django from http://gettingstartedwithdjango.com/ and I have Windows 7 32-bit. The tutorial asked me to install VirtualBox and Vagrant (the tutorial used precise64, but it had issues on my PC so I installed precise32), so I did. Now the tutorial asked me to run sudo apt-get install python-dev python.pip, so I did, but it gave me this error:

        vagrant@precise32:~$ sudo apt-get install python.pip python-dev
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Note, selecting 'python-pip' for regex 'python.pip'
        Note, selecting 'python-pipeline' for regex 'python.pip'
        The following extra packages will be installed:
          libexpat1 libexpat1-dev libpython2.7 python-pkg-resources python-setuptools
          python-support python2.7 python2.7-dev python2.7-minimal
        Suggested packages:
          python-distribute python-distribute-doc python2.7-doc binfmt-support
        The following NEW packages will be installed:
          libexpat1-dev libpython2.7 python-dev python-pip python-pipeline
          python-pkg-resources python-setuptools python-support python2.7-dev
        The following packages will be upgraded:
          libexpat1 python2.7 python2.7-minimal
        3 upgraded, 9 newly installed, 0 to remove and 63 not upgraded.
        Need to get 34.7 MB/35.7 MB of archives.
        After this operation, 42.0 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Err (http removed)us.archive.ubuntu.com/ubuntu/ precise-updates/main python2.7 i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Err (http removed)us.archive.ubuntu.com/ubuntu/ precise-updates/main python2.7-minimal i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Err (http removed)us.archive.ubuntu.com/ubuntu/ precise-updates/main libpython2.7 i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Err (http removed)us.archive.ubuntu.com/ubuntu/ precise-updates/main python2.7-dev i386 2.7.3-0ubuntu3.1
          404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch (http removed)us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/python2.7_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch (http removed)us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/python2.7-minimal_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch (http removed)us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/libpython2.7_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        Failed to fetch (http removed)us.archive.ubuntu.com/ubuntu/pool/main/p/python2.7/python2.7-dev_2.7.3-0ubuntu3.1_i386.deb  404 Not Found [IP: 91.189.91.15 80]
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Please help, what should I do?

    Read the article

  • django views getid

    - by Hulk
        class host(models.Model):
            emp = models.ForeignKey(getname)
            def __unicode__(self):
                return self.topic

    In the view there is this code:

        real = []
        for emp in my_emp:
            real.append(host.objects.filter(emp=emp.id))

    The above returns only the emp values. My question is: how do I get the ids along with the emp values? Thanks..
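    A rough sketch of one way to keep the ids alongside the emp values, assuming the intent is to collect (id, emp) pairs for the hosts matching each emp; values_list is standard Django ORM, while my_emp and host are the names from the question, so this only illustrates the shape of the query:

        # For each emp, collect the matching host rows as (id, emp_id) pairs
        # instead of bare querysets, so the primary key travels with the value.
        real = []
        for emp in my_emp:
            pairs = host.objects.filter(emp=emp.id).values_list('id', 'emp')
            real.append(list(pairs))

        # Alternatively, keep the model instances and read .id / .emp off each one:
        # for h in host.objects.filter(emp=emp.id):
        #     print(h.id, h.emp)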

    Read the article

  • Java Anagram Solver

    - by Alex
    I can work out how to create anagrams of a string but I don't know how I can compare them to a dictionary of real words to check if the anagram is a real word. Is there a class in the Java API that contains the entire English dictionary?
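    The standard Java API does not ship an English dictionary, so the usual approach is to load a word list yourself (for example a newline-separated file such as /usr/share/dict/words, path assumed here) and check candidates against it. A sketch of that idea, written in Python for brevity; the same structure translates to Java with a HashSet and a sorted-character key:

        from collections import defaultdict

        # Load a word list (any newline-separated word file works).
        with open("/usr/share/dict/words") as f:
            words = [w.strip().lower() for w in f if w.strip()]

        # Index every dictionary word by its sorted letters, so all real-word
        # anagrams of a string come from one lookup instead of generating
        # and testing every permutation.
        by_key = defaultdict(set)
        for w in words:
            by_key["".join(sorted(w))].add(w)

        def real_word_anagrams(s):
            return sorted(by_key["".join(sorted(s.lower()))])

        print(real_word_anagrams("listen"))   # e.g. ['enlist', 'listen', 'silent']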

    Read the article

  • VTD-XML Parsing Performance (speed critical factor). Requesting Feedback/Comments

    - by andreas
    Hello, I am about to use VTD-XML (found at http://vtd-xml.sourceforge.net/) and I am interested in real-world usage feedback from anyone who has used the library and has comments. At the URL (http://vtd-xml.sourceforge.net/) there are benchmarks, but if someone has used VTD-XML and has comments FOR it I would like to hear them. Speed is a critical factor in the application, and comments from developers after real-world usage are what I am looking for. Regards,

    Read the article

  • Creating a custom Ubuntu ISO

    - by ajstack
    Hi, I want to create a custom Ubuntu ISO (this ISO will contain all the packages with the latest updates released to date). Something along the lines of:

    1. Take the pristine Ubuntu ISO
    2. Download the updates from some Ubuntu update repositories
    3. Re-create the ISO?

    How should I go about this?

    Read the article

  • iPhone UITextField remembering user's previous input?

    - by Rob
    Is there a way to remember what the user put into a UITextField and have it displayed the next time they come to that UITextField? i.e. - have them input their name the first time they come to the "Name" UITextField but have that name already displayed in that field the next time they come across that UITextField? I want the name to still be editable if they come back to the UITextField, but inputted nonetheless in case they don't need to change it the second time around.

    Read the article

  • Shouldn’t Bind() pass child control’s values to GridView before Page.PreRender?

    - by SourceC
    Hello. For controls such as the GridView, DetailsView, and FormView controls, data-binding expressions are resolved automatically during the control's PreRender event. But doesn't the data source control perform updates prior to the Page.PreRender event? Meaning, shouldn't Bind() pass the child controls' values to the GridView (so they can be passed to the data source control as parameters) before the data source control updates the data source, and thus before the Page.PreRender event? Thanks

    Read the article

  • uploading via http post (multipart/form-data) silently fails with big files

    - by matteo
    When uploading multipart/form-data forms via an HTTP POST request to my Apache web server, very big files (i.e. 30MB) are silently discarded. On the server side all looks as if the attached file was received with 0 bytes size. On the client side all looks like it had been uploaded successfully (it takes the expected long time to upload and the browser gives no error message). On the server, nothing is logged into the error log. An entry is logged into the access log as if everything was ok (a POST request and a 200 OK response). These uploads are being posted to a PHP script. In the PHP script, if I print_r $_FILES, I see the following information for the relevant file:

        [file5] => Array
            (
                [name] => MOV023.3gp
                [type] => video/3gpp
                [tmp_name] => /tmp/phpgOdvYQ
                [error] => 0
                [size] => 0
            )

    Note both [error] = 0 (which should mean no error) and [size] = 0 (as if the file was empty). My PHP script runs fine and receives all the rest of the data except these files. move_uploaded_file succeeds on these files and actually copies them as 0-byte files. I've already changed the php directives max_upload_size to 50M and post_max_size to 200M, so neither the single file nor the request exceeds any size limit. max_execution_time is not relevant, because the time to transfer the data does not count; and I've increased max_input_time to 1000 seconds, though this shouldn't be necessary since this is the time taken to parse the input data, not the time taken to upload it. Is there any Apache configuration, prior to PHP, that could be causing these files to be discarded even before PHP execution? Some limit on size or on upload time? I've read about a default 300-second timeout limit, but this should apply to the time the connection is idle, not the time it takes while actually transferring data, right? Needless to say, uploads with all exactly identical conditions (including file format, client and everything) except smaller file size work seamlessly, so the issue is clearly related to the file or request size, or to the time it takes to send it.

    Read the article

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm as close to real-time as possible. The objectives of this question are to:

    1. Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    2. Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
    3. Monitor whether any given box is up (with "up" being defined as responding to web requests in a reasonable amount of time).

    Here are more details, things I've tried, etc.:

    - I am not interested in logging. We have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis.
    - We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us in real time.
    - Stuff like the JetBrains profiler is good for testing but, again, not helpful during real-time monitoring.
    - The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run itself on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment.

    Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal. I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If the client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
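    As a rough illustration of the client-server polling idea described at the end (the host names, port, /metrics endpoint and JSON field names are all assumptions, not an existing tool), a poller along these lines could hit a small metrics endpoint on each box every few seconds, record the numbers, and treat a timeout as "down":

        import json
        import sqlite3
        import time
        import urllib.request

        SERVERS = ["web01:8080", "web02:8080"]   # hypothetical farm members
        POLL_INTERVAL = 15                       # seconds; well under the 120 s ceiling

        db = sqlite3.connect("farm_metrics.db")
        db.execute("""CREATE TABLE IF NOT EXISTS metrics
                      (ts REAL, host TEXT, up INTEGER, cpu REAL, mem REAL, paging REAL)""")

        while True:
            for host in SERVERS:
                try:
                    # Each box is assumed to expose its WMI counters as JSON at /metrics.
                    with urllib.request.urlopen(f"http://{host}/metrics", timeout=5) as r:
                        m = json.load(r)
                    row = (time.time(), host, 1, m["cpu"], m["memory"], m["disk_paging"])
                except Exception:
                    # No response within the timeout: treat the box as down.
                    row = (time.time(), host, 0, None, None, None)
                db.execute("INSERT INTO metrics VALUES (?, ?, ?, ?, ?, ?)", row)
            db.commit()
            time.sleep(POLL_INTERVAL)

    A dashboard or query over the metrics table then gives the near-real-time view; the polling interval is the knob that trades load against freshness.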

    Read the article

  • Get size of UIView after applying CGAffineTransform

    - by Ican Zilb
    I was surprised not to find an answer to this question; maybe it is something very simple I somehow overlooked: how do I get the real size of a UIView after I apply a CGAffineTransform to it? E.g. my UIView has size 300 x 200, I apply a scaling transform, let's say factor 2 both horizontally and vertically, so the UIView now takes 600 x 400 on the screen, but its bounds and its layer's bounds still return a size of 300 x 200 ... where do I find the real size of the UIView?

    Read the article

  • Why is the concept of Marshalling called as such?

    - by chickeninabiscuit
    I've always thought that the concept of Marshalling had a bit of a funny name. My mental conception of the process would always involve an ol' wildwest gunslinging marshall who would coerce objects into serialized form at gunpoint. I just found out the real reason Marshalling is called what it's called and chuckled. Do you know the real reason, or perhaps you too are familiar with my gunslinger?

    Read the article

  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on them. The app needs to be real-time responsive (ideally < 50ms to process each request). I have been doing some timing tests on the code I have, and I found something very interesting (see below).

        clearLog();
        log("Log cleared");

        camera.QueryFrame();
        camera.QueryFrame();
        log("Camera buffer cleared");

        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);

        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    Each time log is called, the time since the beginning of the processing is displayed. Here is my log output:

        [3 ms]Log cleared
        [41 ms]Camera buffer cleared
        [41 ms]Sx: 589 Sy: 414
        [112 ms]Camera output acuired for processing

    The timings are computed using a Stopwatch from System.Diagnostics.

    QUESTION 1: I find this slightly interesting, since when the same method is called twice it executes in ~40ms, but when it is called once more afterwards it takes longer (~70ms). Assigning the value can't really be taking that long, right?

    QUESTION 2: Also, the timing for each step recorded above varies from time to time. The values for some steps are sometimes as low as 0ms and sometimes as high as 100ms, though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If there is some other reason, please let me know.) Is there some way to ensure that when this function runs, it gets the highest priority, so that the speed test results will be consistently low (in terms of time)?

    EDIT: I changed the code to remove the two blank query frames from above, so the code is now:

        clearLog();
        log("Log cleared");

        Sensor s = t.val;
        log("Sx: " + S.X + " Sy: " + S.Y);

        Image<Bgr, Byte> cameraImage = camera.QueryFrame();
        log("Camera output acuired for processing");

    The timing results are now:

        [2 ms]Log cleared
        [3 ms]Sx: 589 Sy: 414
        [5 ms]Camera output acuired for processing

    The next steps now take longer (sometimes a subsequent step jumps to 20-30ms, while it was previously almost instantaneous). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
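    On the measurement side, a common way to separate one-off warm-up cost from steady-state cost and to tame scheduler noise is to time the same step many times and report the minimum and median rather than a single run. A small sketch of such a harness, in Python rather than C#; step() is only a stand-in for the real frame-grab/processing call:

        import statistics
        import time

        def step():
            # Stand-in for the real work (e.g. grabbing and processing a frame).
            sum(i * i for i in range(100_000))

        # One untimed warm-up call so lazy initialisation doesn't pollute the numbers.
        step()

        samples = []
        for _ in range(50):
            t0 = time.perf_counter()
            step()
            samples.append((time.perf_counter() - t0) * 1000)   # milliseconds

        print(f"min    {min(samples):.2f} ms")      # best case, least scheduler noise
        print(f"median {statistics.median(samples):.2f} ms")
        print(f"max    {max(samples):.2f} ms")      # shows how bad the outliers get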

    Read the article

  • Sql or VB for Access

    - by vijay
    I have a table in Access as below:

        SI Number          Time
        1.14172E+20        13:30:35
        1244066650         18:58:48
        1244066650         19:03:12
        1244066650         19:05:50
        01724656007_dsl    22:15:20
        01724656007_dsl    22:18:00
        01724656007_dsl    22:24:28
        1141530407         10:27:49
        1141530407         10:29:13

    And the required output in the same table is:

        SI Number          Time        Diff
        1.14172E+20        13:30:35
        1244066650         18:58:48
        1244066650         19:03:12    0:04:24
        1244066650         19:05:50    0:02:38
        01724656007_dsl    22:15:20
        01724656007_dsl    22:18:00    0:02:40
        01724656007_dsl    22:24:28    0:06:28
        1141530407         10:27:49
        1141530407         10:29:13    0:01:24

    What I require: if the SI Number of a record equals the SI Number of the previous record, then the Diff column of that record should hold (Time of that record - Time of the previous record); the first record of each group remains blank. Urgent help required. Vijay
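    The question is ultimately about Access SQL or VBA, but the row-by-row rule is easy to pin down with a short sketch; this is written in Python purely to illustrate the logic of comparing each row's SI Number to the previous row's, using the sample data above:

        from datetime import datetime

        rows = [
            ("1.14172E+20", "13:30:35"),
            ("1244066650", "18:58:48"),
            ("1244066650", "19:03:12"),
            ("1244066650", "19:05:50"),
            ("01724656007_dsl", "22:15:20"),
            ("01724656007_dsl", "22:18:00"),
            ("01724656007_dsl", "22:24:28"),
            ("1141530407", "10:27:49"),
            ("1141530407", "10:29:13"),
        ]

        def t(s):
            return datetime.strptime(s, "%H:%M:%S")

        prev_si, prev_time = None, None
        for si, tm in rows:
            # Diff only when this row's SI Number matches the previous row's.
            diff = str(t(tm) - t(prev_time)) if si == prev_si else ""
            print(f"{si:<18}{tm:<10}{diff}")
            prev_si, prev_time = si, tm

    In Access the same "previous row" comparison is usually done with a correlated subquery or with DLookup over an ordering column, since rows have no inherent order without one.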

    Read the article

  • timeIntervalSinceDate Accuracy

    - by mmccomb
    I've been working on a game with an engine that updates 20 times per second. I've got to the point now where I want to start getting some performance figures and tweak the rendering and logic updates. In order to do so I started to add some timing code to my game loop, implemented as follows...

        NSDate* startTime = [NSDate date];
        // Game update logic here....
        // Also timing of smaller internal events
        NSDate* endTime = [NSDate date];
        [endTime timeIntervalSinceDate:startTime];

    I noticed, however, that when I timed blocks within the outer timing logic, the time they took to execute did not sum up to match the overall time taken. So I wrote a small unit test to demonstrate the problem, in which I time the overall time taken to complete the test and then 10 smaller events; here it is...

        - (void)testThatSumOfTimingsMatchesOverallTiming
        {
            NSDate* startOfOverallTime = [NSDate date];

            // Variable to hold summation of smaller timing events in the upcoming loop...
            float sumOfIndividualTimes = 0.0;
            NSTimeInterval times[10] = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0};

            for (int i = 0; i < 10; i++)
            {
                NSDate* startOfIndividualTime = [NSDate date];

                // Kill some time...
                sleep(1);

                NSDate* endOfIndividualTime = [NSDate date];
                times[i] = [endOfIndividualTime timeIntervalSinceDate:startOfIndividualTime];
                sumOfIndividualTimes += times[i];
            }

            NSDate* endOfOverallTime = [NSDate date];
            NSTimeInterval overallTimeTaken = [endOfOverallTime timeIntervalSinceDate:startOfOverallTime];

            NSLog(@"Sum of individual times: %fms", sumOfIndividualTimes);
            NSLog(@"Overall time: %fms", overallTimeTaken);

            STAssertFalse(TRUE, @"");
        }

    And here's the output...

        Sum of individual times: 10.001377ms
        Overall time: 10.016834ms

    Which illustrates my problem quite clearly. The overall time was 0.000012ms but the smaller events took only 0.000001ms. So what happened to the other 0.000011ms? Is there anything that looks particularly wrong with my code? Or is there an alternative timing mechanism I should use?
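    The sum of the inner timings will always fall a little short of the outer timing, because the outer timer also covers the work done between the inner measurements (allocating the NSDate objects, the loop bookkeeping, the float accumulation), which no inner timer ever sees. A compact illustration of the same effect, sketched in Python only because it is short; the structure mirrors the test above:

        import time

        outer_start = time.perf_counter()

        inner_sum = 0.0
        for _ in range(10):
            t0 = time.perf_counter()
            time.sleep(0.01)                  # the "real work" being measured
            inner_sum += time.perf_counter() - t0
            # Everything from here until the next t0 is outside every inner timer,
            # but still inside the outer one: loop overhead, accumulation, etc.

        outer = time.perf_counter() - outer_start
        print(f"sum of inner timings: {inner_sum * 1000:.3f} ms")
        print(f"outer timing:         {outer * 1000:.3f} ms")   # always a bit larger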

    Read the article

  • CSLA.net - Inheritable Base classes

    - by JMSA
    I was reading the book "Expert C# 2005 Business Objects". The book describes various base classes to be inherited by various classes to solve real-world problems, but it does not provide examples of all those classes. Can anyone give me all of those examples (with reasons) to better understand CSLA? For example, which real-world objects are to be considered Read-only Root Objects (Student/Product/Order, etc.), and why?

    Read the article

  • Ubuntu 12.10 Clock is wrong

    - by mardavi
    I have an issue with Ubuntu Quantal, as it shows the wrong time. It is completely messy: the right time from time.is now is 09.43 and my clock shows 17.48. I am using the ntp service and I already checked the timezone and it is correct. I also checked the hardware clock through sudo hwclock --show and this is right too. I also tried sudo dpkg-reconfigure tzdata but with bad luck. What else can I try? As asked, here is my /etc/ntp.conf:

        # /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
        driftfile /var/lib/ntp/ntp.drift

        # Enable this if you want statistics to be logged.
        #statsdir /var/log/ntpstats/
        statistics loopstats peerstats clockstats
        filegen loopstats file loopstats type day enable
        filegen peerstats file peerstats type day enable
        filegen clockstats file clockstats type day enable

        # Specify one or more NTP servers.
        # Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
        # on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
        # more information.
        server 0.ubuntu.pool.ntp.org
        server 1.ubuntu.pool.ntp.org
        server 2.ubuntu.pool.ntp.org
        server 3.ubuntu.pool.ntp.org
        server time.nist.gov

        # Use Ubuntu's ntp server as a fallback.
        server ntp.ubuntu.com

        # Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
        # details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
        # might also be helpful.
        #
        # Note that "restrict" applies to both servers and clients, so a configuration
        # that might be intended to block requests from certain clients could also end
        # up blocking replies from your own upstream servers.

        # By default, exchange time with everybody, but don't allow configuration.
        restrict -4 default kod notrap nomodify nopeer noquery
        restrict -6 default kod notrap nomodify nopeer noquery

        # Local users may interrogate the ntp server more closely.
        restrict 127.0.0.1
        restrict ::1

        # Clients from this (example!) subnet have unlimited access, but only if
        # cryptographically authenticated.
        #restrict 192.168.123.0 mask 255.255.255.0 notrust

        # If you want to provide time to your local subnet, change the next line.
        # (Again, the address is an example only.)
        #broadcast 192.168.123.255

        # If you want to listen to time broadcasts on your local subnet, de-comment the
        # next lines. Please do this only if you trust everybody on the network!
        #disable auth
        #broadcastclient

    In addition, the ntp service was not running when I turned on my laptop today.

    Read the article

  • gcc precompiled headers weird behaviour with -c option

    - by pachanga
    Folks, I'm using gcc-4.4.1 on Linux, and before trying precompiled headers in a really large project I decided to test them on a simple program. They "kinda work" but I'm not happy with the results and I'm sure there is something wrong with my setup. First of all, I wrote a simple program (main.cpp) to test if they work at all:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

        int main()
        {
            return 0;
        }

    Then I created the precompiled header file pre.h (in the same directory) as follows:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

    ...and compiled it:

        $ g++ -I. pre.h
        (pre.h.gch was created)

    After that I measured compile time with and without precompiled headers:

        with pch
        $ time g++ -I. -include pre.h main.cpp
        real    0m0.128s
        user    0m0.088s
        sys     0m0.048s

        without pch
        $ time g++ -I. main.cpp
        real    0m0.838s
        user    0m0.784s
        sys     0m0.056s

    So far so good! Almost 7 times faster, that's impressive! Now let's try something more realistic. All my sources are built with the -c option and for some reason I can't make pch play nicely with it. You can reproduce this with the steps below... I created the test module foo.cpp as follows:

        #include <boost/bind.hpp>
        #include <boost/function.hpp>
        #include <boost/type_traits.hpp>

        int whatever()
        {
            return 0;
        }

    Here are the timings of my attempts to build the module foo.cpp with and without pch:

        with pch
        $ time g++ -I. -include pre.h -c foo.cpp
        real    0m0.357s
        user    0m0.348s
        sys     0m0.012s

        without pch
        $ time g++ -I. -c foo.cpp
        real    0m0.330s
        user    0m0.292s
        sys     0m0.044s

    That's quite strange, it looks like there is no speed-up at all! (I ran the timings several times.) It turned out precompiled headers were not used at all in this case; I checked with the -H option (the output of "g++ -I. -include pre.h -c foo.cpp -H" didn't list pre.h.gch at all). What am I doing wrong?

    Read the article

  • Git for Websites / post-receive / Separation of Test and Production Sites

    - by Walt W
    Hi all, I'm using Git to manage my website's source code and deployment, and currently have the test and live sites running on the same box. Following this resource http://toroid.org/ams/git-website-howto originally, I came up with the following post-receive hook script to differentiate between pushes to my live site and pushes to my test site:

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git --work-tree=c:/temp/BLAH checkout -f master
                        echo "Updated master"
                        ;;
                    refs/heads/testbranch )
                        git --work-tree=c:/temp/BLAH2 checkout -f testbranch
                        echo "Updated testbranch"
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    However, I have doubts that this is actually safe :) I'm by no means a Git expert, but I am guessing that Git probably keeps track of the current checked-out branch head, and this approach probably has the potential to confuse it to no end. So a few questions:

    1. Is this safe?
    2. Would a better approach be to have my base repository be the test site repository (with corresponding working directory), and then have that repository push changes to a new live site repository, which has a corresponding working directory to the live site base? This would also allow me to move the production to a different server and keep the deployment chain intact.
    3. Is there something I'm missing? Is there a different, clean way to differentiate between test and production deployments when using Git for managing websites?

    As an additional note in light of Vi's answer, is there a good way to do this that would handle deletions without mucking with the file system much?

    Thank you, -Walt

    PS - The script I came up with for the multiple repos (and am using unless I hear better) is as follows:

        sitename=`basename \`pwd\``

        while read ref
        do
            #echo "Ref updated:"
            #echo $ref -- would print something like example at top of file
            result=`echo $ref | gawk -F' ' '{ print $3 }'`
            if [ $result != "" ]; then
                echo "Branch found: "
                echo $result
                case $result in
                    refs/heads/master )
                        git checkout -q -f master
                        if [ $? -eq 0 ]; then
                            echo "Test Site checked out properly"
                        else
                            echo "Failed to checkout test site!"
                        fi
                        ;;
                    refs/heads/live-site )
                        git push -q ../Live/$sitename live-site:master
                        if [ $? -eq 0 ]; then
                            echo "Live Site received updates properly"
                        else
                            echo "Failed to push updates to Live Site"
                        fi
                        ;;
                    * )
                        echo "No update known for $result"
                        ;;
                esac
            fi
        done
        echo "Post-receive updates complete"

    And then the repo in ../Live/$sitename (these are "bare" repos with working trees added after init) has the basic post-receive:

        git checkout -f
        if [ $? -eq 0 ]; then
            echo "Live site `basename \`pwd\`` checked out successfully"
        else
            echo "Live site failed to checkout"
        fi

    Read the article

  • Soon to be PhD in Computer Science - Which Path to Follow?

    - by mttr
    I am going to submit my PhD thesis within the next six months. My PhD is on managing the availability of large-scale distributed systems, so I have some experience actually building non-trivial systems (+ I have four years' experience working as a programmer). I am now trying to figure out what I should do following the PhD. I enjoy research (a quick definition: identify problem, come up with solution, ask interesting questions, find ways to answer them, build system, experiment, contribute some new knowledge and publish). I also like teaching and supervising students. It would seem that a career in academia is the ideal thing to do (can work on non-trivial problems and contribute something of use to some or more people). However, a career in academia has two significant drawbacks. First, it can be difficult to gain access to real systems with real users which then display real problems. This creates the danger that you do work that seems important (to you and maybe to some of your colleagues), but is not really relevant to anything or anyone. Second, the pay is pretty sad. Apparently, you have to sacrifice this for the privilege of doing research. I enjoy programming, but don't just want to hack some web-based system for the rest of my life. That is, working in IT for a bank is not a future I see myself enjoying. I want to work on interesting problems (that's difficult to define clearly): things where you don't know how to start, that take some time to figure out and attack, that require a rigorous approach to demonstrate that the problem has been solved, and problems that need a solution in the real world. Given the experience of people on stackoverflow, what do you think suitable options are and why (or alternatively, what gaps in my thinking does the above reveal)? Is industrial research (aka IBM Research, Microsoft Research) the only alternative avenue to a career in academia? What other areas, companies, occupations, etc. could provide me with stimulating, inspiring work? In which regions and countries am I most likely to find such work? Please share your experience.

    Read the article

  • How to pass parameters to a stored procedure?

    - by Kumara
    I use SQL Server 2005 for my small web application. I want to pass parameters to a stored procedure, but there is one condition: the number of parameters can change from time to time. Say this time I pass name and address, and next time I pass name, surname, and address; the number of parameters may range from 1 to 30. Please send any answer if you have one. Thanks

    Read the article

  • Endpoints or URIs for a WCF client test-drive

    - by Xencor
    I am aware of the Amazon.com exposed URIs ... which I need to sign up for, and from then on I can use them ... roll up my sleeves and do some WCF client test-drive coding. What are the other such publicly exposed endpoints that reflect real or almost real-time services? Any offerings specifically from Microsoft? I am basically looking to write WCF clients for both WCF and non-WCF services ... RESTful ones and otherwise.

    Read the article

  • MySQL - Summary Tables

    - by jwzk
    Which method do you suggest, and why? Creating a summary table and...

    1) Updating the table as the action occurs, in real time.
    2) Running GROUP BY queries every 15 minutes to update the summary table.
    3) Something else?

    The data must be near real time; it can't wait an hour, a day, etc.
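    For a sense of what option 1 looks like in practice, here is a minimal sketch; it uses Python with sqlite3 purely to stay self-contained, and the table/column names are made up. With MySQL the same idea is usually expressed as INSERT ... ON DUPLICATE KEY UPDATE or a trigger. The point is that the detail row and the summary increment are committed in the same transaction, so the summary is always current.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE events  (user_id INTEGER, amount REAL);
            CREATE TABLE summary (user_id INTEGER PRIMARY KEY,
                                  event_count INTEGER, total REAL);
        """)

        def record_event(user_id, amount):
            # Detail insert and summary update committed together (option 1),
            # so the summary table is always in step with the raw data.
            with db:
                db.execute("INSERT INTO events VALUES (?, ?)", (user_id, amount))
                cur = db.execute(
                    "UPDATE summary SET event_count = event_count + 1, total = total + ? "
                    "WHERE user_id = ?", (amount, user_id))
                if cur.rowcount == 0:
                    db.execute("INSERT INTO summary VALUES (?, 1, ?)", (user_id, amount))

        record_event(1, 10)
        record_event(1, 5)
        print(db.execute("SELECT * FROM summary").fetchall())   # [(1, 2, 15.0)]

    Whether that per-write cost is acceptable compared to a periodic GROUP BY roll-up mostly depends on the insert volume and how much contention the summary rows see.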

    Read the article
