Search Results

Search found 3358 results on 135 pages for 'patch releases'.


  • multiline sed using backreferences...

    - by pagid
    Hi, I'm converting patch scripts using a command-line script. Within these scripts there's a combination of two lines like:

        --- /dev/null
        +++ filename.txt

    which needs to be converted to:

        --- filename.txt
        +++ filename.txt

    Initially I tried:

        less file.diff | sed -e "s/---\/dev\null\n+++ \(.*\)/--- \1\n+++ \1/"

    but I found out that multiline handling is much more complex in sed :( Any help is appreciated...
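
    A minimal sketch of one way to do this: use N to pull the +++ line into the pattern space, then substitute across the embedded newline (GNU sed is assumed, since \n in the replacement is a GNU extension):

        sed -e '/^--- \/dev\/null$/{N;s|^--- /dev/null\n+++ \(.*\)$|--- \1\n+++ \1|}' file.diff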


  • visual studio crashes when i view a javascript file

    - by oo
    I start up an existing solution and click on a JavaScript file; the file opens in the IDE for a few seconds and then Visual Studio disappears. This is consistent and reproducible. I saw the patch for KB958502 and installed it, but it didn't seem to do anything. Any suggestions on how I can proceed? This is completely stopping my development here.


  • Solr sort by function

    - by spacemonkey
    Hi, I need to sort query results by the output of a function that takes "score" and a couple of other fields as input (50% of the total score comes from the similarity score and 50% from the document's popularity). Is there a way to do this without having to install the "Sort by Function" patch? Thanks!
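
    For what it's worth, a sketch of one possible workaround via the dismax handler's boost function (bf is a standard dismax parameter; the popularity field name and the 0.5 weight are assumptions, and an additive boost is not an exact 50/50 blend):

        http://localhost:8983/solr/select?q=some+query&defType=dismax&bf=product(popularity,0.5)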


  • Best practices for QA / testing in an Agile (Scrum+XP) team?

    - by Srirangan
    Hey guys, we're getting a QA engineer for the first time on our project, and we're not sure how best to use him. We work in an Agile environment: pair programming, user stories, short sprints (two weeks), daily stand-ups, retrospectives, planning meetings, quick releases, etc. One obvious way to use a tester is to verify bug fixes and user stories every sprint. Are there any better ways for an Agile team to utilize a tester? Thanks, Sri


  • SVN export just the changed files from tags

    - by Eric
    Does anyone know how to export only the changed files between two tags using svn? Let's say I have tag 1.0 and then later fix bugs in the trunk. Next, I'm ready for a new patch release, so I tag it 1.1. Now I want to export the files that changed between tags 1.0 and 1.1. Is this possible?
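
    A rough sketch of one way to do it, combining svn diff --summarize with svn export (the repository URL is a placeholder, and the path handling assumes a plain tags/ layout):

        REPO=http://svn.example.com/repo
        svn diff --summarize "$REPO/tags/1.0" "$REPO/tags/1.1" \
          | awk '$1 != "D" { print $2 }' \
          | while read -r url; do
              dest="export/${url#$REPO/tags/1.1/}"
              mkdir -p "$(dirname "$dest")"
              svn export -q "$url" "$dest"
            done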


  • Why wait should always be in synchronized block

    - by diy
    Hi gents, we all know that in order to invoke Object.wait(), the call must be placed in a synchronized block; otherwise an IllegalMonitorStateException is thrown. But what's the reason for making this restriction? I know that wait() releases the monitor, but why do we need to explicitly acquire the monitor by making a particular block synchronized, and then release the monitor by calling wait()? What is the potential damage if it were possible to invoke wait() outside a synchronized block while retaining its semantics of suspending the caller thread? Thanks in advance.
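
    For context, a minimal sketch (class and field names are illustrative, not from the question) of the lost-wakeup race the restriction prevents: the condition check and the wait() must happen atomically under one monitor, otherwise a notify() can slip in between them and be lost.

        // A single-slot queue; "lock" guards "item" and serves as the monitor.
        class Slot {
            private final Object lock = new Object();
            private String item; // null means "empty"

            String take() throws InterruptedException {
                synchronized (lock) {
                    // Checking the condition and calling wait() under one monitor
                    // means a producer cannot notify between "slot is empty" and
                    // "start waiting" -- exactly the race the rule prevents.
                    while (item == null) {
                        lock.wait(); // atomically releases the monitor and suspends
                    }
                    String result = item;
                    item = null;
                    return result;
                }
            }

            void put(String value) {
                synchronized (lock) {
                    item = value;
                    lock.notifyAll(); // the wakeup is never lost: any taker either
                                      // holds the lock or is already waiting
                }
            }
        }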


  • Does Microsoft make available the .obj files for its CRT versions to enable whole program optimization?

    - by Leeks and Leaks
    Given the potential performance improvements from LTCG (link-time code generation, or whole-program optimization), which requires the availability of .obj files, does Microsoft make the .obj files available for the various flavors of its MSVCRT releases? One would think this would be a good place for some potential gain. It's not clear what they would have to lose, since the IL generated in the .obj files is undocumented and processor-specific.
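
    For reference, a minimal sketch of how LTCG is normally enabled with MSVC (standard cl and link flags; the file names are placeholders):

        cl /GL foo.cpp bar.cpp /link /LTCG

    /GL makes the compiler emit IL-bearing .obj files, and /LTCG tells the linker to optimize across them at link time; objects compiled without /GL still link, but cannot participate in the cross-module optimization.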


  • upgrading boost version

    - by idimba
    I'm using RHEL 5.3, which ships with gcc 4.1.2 and Boost 1.33. So there's no boost::unordered_map, no make_shared() factory function for creating a boost::shared_ptr, and none of the other features available in newer releases of Boost. Is there a newer version of Boost compatible with this version of gcc? If so, how is the upgrade performed?
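
    A rough sketch of the usual from-source upgrade path, leaving the system Boost packages untouched (the version number and install prefix are illustrative):

        tar xf boost_1_43_0.tar.gz
        cd boost_1_43_0
        ./bootstrap.sh --prefix=/opt/boost-1.43
        ./bjam install
        # point builds at the new copy:
        g++ -I/opt/boost-1.43/include -L/opt/boost-1.43/lib -Wl,-rpath,/opt/boost-1.43/lib app.cpp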


  • Reordering of commits

    - by Peter
    Hi, I'm currently working on a branch and want some commits merged into other branches:

           /--a--b--c--d--e--f--g-----   branchA (letters are commits)
        --o----------------------------  master
           \---------------------------  branchB

    However, I've noticed that it would be a good idea to pool some commits. I want to "concatenate" commits a, d, e and g into one patch and commit it to master. Commits b and f should go as one commit to branchB. Is there a good 'git'-ish way to achieve this?
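
    One sketch of the usual approach, using cherry-pick with --no-commit (-n) to squash several picks into a single commit (the letters are the placeholder commit names from the diagram; older git may need one cherry-pick invocation per commit):

        git checkout master
        git cherry-pick -n a d e g      # stage the combined changes, commit nothing yet
        git commit -m "combined a, d, e and g from branchA"

        git checkout branchB
        git cherry-pick -n b f
        git commit -m "combined b and f from branchA"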


  • HOW TO FIX "The requested URL /phpMyAdmin was not found on this server"

    - by user1392840
    I have installed Apache, PHP and MySQL on Mac OS X 10.8.1. After this, when I type the URL into my web browser, it gives this error message:

        Not Found
        The requested URL /News-2012-Academy-Awards-53.html was not found on this server.
        Apache/2.2.22 (Unix) mod_fastcgi/2.4.6 mod_ssl/2.2.22 OpenSSL/0.9.8r DAV/2 PHP/5.3.13 with Suhosin-Patch mod_wsgi/3.3 Python/2.7.2 Server at clontarf.girlsacademy.com.au Port 80

    Please help me solve it.
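
    If the goal is really to reach phpMyAdmin (as the title says), one common fix is an explicit alias in httpd.conf; a sketch in the Apache 2.2 syntax matching the server banner above (the install path is an assumption, adjust it to wherever phpMyAdmin was unpacked):

        Alias /phpMyAdmin /usr/local/phpMyAdmin
        <Directory /usr/local/phpMyAdmin>
            Options FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>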


  • Managing code transitions between developers

    - by gAMBOOKa
    What are your best practices for making sure newly hired developers quickly get up to speed with the code, and for ensuring that developers who move on don't set back ongoing releases? Some ideas to get started:

    - Documentation
    - Use well-established frameworks
    - Training / encourage mentoring
    - Notice period in the contract


  • Scenarios for Bazaar and SVN interaction

    - by Adam Badura
    At our company we are using an SVN repository. I do programming both from work (the main place) and from home (mostly experiments and refactoring). Those are two different machines, on different networks, and almost never turned on at the same time (after all, I'm either at work or at home...).

    I wanted to give a distributed version control system a chance and solve some of the issues associated with the SVN-based process and having two machines. Of git, Mercurial and Bazaar, I chose to start with Bazaar, since it claims to be designed to be used by human beings. It's my first time with a distributed system, and having a nice, easy user interface was important to me. The features I wanted were:

    - Being able to update from the SVN repository and commit to it.
    - Being able to commit the steps of my work on a task locally.
    - Being able to have a few separate tasks at the same time in their own local branches.
    - Being able to share those branches between my work and home computers.

    As a means of transport between the work and home computers I wanted to use a pen drive. The company server will not work, since I'm not allowed to install anything there. Neither will a web-hosted repository, as I may not upload source code to the web (especially if it would be public, which seems to be the common case with free services). This transport should be Bazaar-based (or whatever else I end up with) so it can be done more or less automatically, but manually copying folders or generating patch files (provided they would work; I have bad experience with patch files in SVN) would do as well if there is no better solution. The pen drive should only be used for transportation, though; I do not want to edit or build on it.

    I tried following the Bazaar guidelines for integration with SVN, but I failed. I tried both bzr svn-import and bzr checkout, providing the URL of my repository as both https://... and svn+https://.... In some cases it had issues with certificates, but the output named an argument to ignore them, so I did. Sometimes it asked me to log in (in other cases maybe it remembered; I don't know), which I did. All runs were very slow (this could be our server's fault) and at some point were interrupted by a dropped connection (this almost certainly is our server: it truncates connections after some time). And since (as opposed to SVN) restarting begins anew rather than resuming from the point of interruption, I was never able to reach all of the ~19000 revisions (usually getting somewhere around 150).

    What should I do, and how, with Bazaar? Is it possible to somehow import the SVN repository from a local checkout (so that I do not suffer the connection truncation)? I was told that a colleague who used to work with us did something similar (importing an SVN repository with full history) with Mercurial in practically no time, so I'm seriously considering trying Mercurial, even if only to see whether it works. But also: what are your general guidelines for achieving the features listed above?
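
    On the "import from somewhere local" question, a sketch of one standard workaround: mirror the repository locally with svnsync, which (unlike a single long import) resumes after a dropped connection, then import from the mirror (the repository URL is a placeholder; bzr svn-import comes from the bzr-svn plugin):

        svnadmin create /tmp/mirror
        # svnsync must be allowed to set revision properties on the mirror:
        echo '#!/bin/sh' > /tmp/mirror/hooks/pre-revprop-change
        chmod +x /tmp/mirror/hooks/pre-revprop-change
        svnsync init file:///tmp/mirror https://svn.example.com/repo
        svnsync sync file:///tmp/mirror     # rerun as often as needed; it resumes
        bzr svn-import file:///tmp/mirror imported-repo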


  • Since upgrading to Solaris 11, my ARC size has consistently targeted 119MB, despite having 30GB RAM. What? Why?

    - by growse
    I ran a NAS/SAN box on Solaris 11 Express before Solaris 11 was released. The box is an HP X1600 with an attached D2700: in all, 12x 1TB 7200rpm SATA disks and 12x 300GB 10k SAS disks in separate zpools. Total RAM is 30GB. Services provided are CIFS, NFS and iSCSI.

    All was well, and my ZFS memory usage graph showed a fairly healthy ARC size of around 23GB, making use of the available memory for caching. However, I then upgraded to Solaris 11 when it came out, and the graph changed dramatically. Partial output of arc_summary.pl is:

        System Memory:
            Physical RAM:  30701 MB
            Free Memory :  26719 MB
            LotsFree:      479 MB

        ZFS Tunables (/etc/system):

        ARC Size:
            Current Size:            915 MB (arcsize)
            Target Size (Adaptive):  119 MB (c)
            Min Size (Hard Limit):   64 MB (zfs_arc_min)
            Max Size (Hard Limit):   29677 MB (zfs_arc_max)

    It's targeting 119MB while sitting at 915MB. It's got 30GB to play with. Why? Did they change something?

    Edit

    To clarify, arc_summary.pl is Ben Rockwood's, and the relevant lines generating the above stats are:

        my $mru_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{p};
        my $target_size  = ${Kstat}->{zfs}->{0}->{arcstats}->{c};
        my $arc_min_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_min};
        my $arc_max_size = ${Kstat}->{zfs}->{0}->{arcstats}->{c_max};
        my $arc_size     = ${Kstat}->{zfs}->{0}->{arcstats}->{size};

    The Kstat entries are there; I'm just getting odd values out of them.

    Edit 2

    I've just re-measured the ARC size with arc_summary.pl and verified these numbers with kstat:

        System Memory:
            Physical RAM:  30701 MB
            Free Memory :  26697 MB
            LotsFree:      479 MB

        ZFS Tunables (/etc/system):

        ARC Size:
            Current Size:            744 MB (arcsize)
            Target Size (Adaptive):  119 MB (c)
            Min Size (Hard Limit):   64 MB (zfs_arc_min)
            Max Size (Hard Limit):   29677 MB (zfs_arc_max)

    The thing that strikes me is that the target size is 119MB. Looking at the graph, it has targeted exactly the same value (124.91M according to cacti, 119M according to arc_summary.pl; I think the difference is just 1024 vs. 1000 rounding) ever since Solaris 11 was installed. It looks like the kernel is making zero effort to shift the target size to anything different. The current size fluctuates as the (large) needs of the system fight with the target size, and the equilibrium appears to be between 700 and 1000MB.

    So the question is now a little more pointed: why is Solaris 11 hard-setting my ARC target size to 119MB, and how do I change it? Should I raise the min size to see what happens? I've stuck the output of kstat -n arcstats over at http://pastebin.com/WHPimhfg

    Edit 3

    OK, weirdness now. I know flibflob mentioned that there was a patch to fix this. I haven't applied it yet (still sorting out internal support issues), and I've not applied any other software updates. Last Thursday the box crashed, as in completely stopped responding to everything. When I rebooted it, it came back up fine, and judging by the graph the problem seems to have fixed itself. This is proper la-la-land stuff now. I've literally no idea what's going on. :(
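
    For anyone wanting to try the "raise the min size" experiment, a sketch of the standard tunable (the value is illustrative; /etc/system changes take effect after a reboot, and comment lines in that file start with *):

        * give the ARC a 4 GB floor
        set zfs:zfs_arc_min = 0x100000000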


  • NGINX - CORS error affecting only Firefox

    - by wiherek
    This is an issue with Nginx that affects only Firefox. I have this config (also at http://pastebin.com/q6Yeqxv9):

        upstream connect {
            server 127.0.0.1:8080;
        }

        server {
            server_name admin.example.com www.admin.example.com;
            listen 80;
            return 301 https://admin.example.com$request_uri;
        }

        server {
            listen 80;
            server_name ankieta.example.com www.ankieta.example.com;
            add_header Access-Control-Allow-Origin $http_origin;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
            return 301 https://ankieta.example.com$request_uri;
        }

        server {
            server_name admin.example.com;
            listen 443 ssl;
            ssl_certificate /srv/ssl/14182263.pem;
            ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;

            location / {
                proxy_pass http://connect;
            }
        }

        server {
            server_name ankieta.example.com;
            listen 443 ssl;
            ssl_certificate /srv/ssl/14182263.pem;
            ssl_certificate_key /srv/ssl/admin_i_ankieta.example.com.key;
            ssl_protocols SSLv3 TLSv1;
            ssl_ciphers ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM;
            root /srv/limesurvey;
            index index.php;
            add_header 'Access-Control-Allow-Origin' $http_origin;
            add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Access-Control-Allow-Headers' 'Access-Control-Request-Method,Access-Control-Request-Headers,Cache,Pragma,Authorization,Accept,Accept-Encoding,Accept-Language,Host,Referer,Content-Length,Origin,DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
            client_max_body_size 4M;

            location / {
                try_files $uri $uri/ /index.php?q=$uri&$args;
            }

            location ~ /*.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /srv/limesurvey$fastcgi_script_name;
                # fastcgi_param HTTPS $https;
                fastcgi_intercept_errors on;
                fastcgi_pass 127.0.0.1:9000;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }
        }

    This is basically an AngularJS app and a PHP app (LimeSurvey) served under two different domains by the same web server (Nginx). The AngularJS app is in fact served by ConnectJS, which Nginx proxies to (ConnectJS listens only on localhost). In the Firefox console I get this:

        Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://ankieta.example.com/admin/remotecontrol. This can be fixed by moving the resource to the same domain or enabling CORS.

    which of course is annoying. Other browsers (Chrome, IE) work fine. Any suggestions on this?
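
    A sketch of one possible fix, on the assumption that what fails is the CORS preflight (an OPTIONS request that ends up at the PHP backend, or at the port-80 blocks whose 301 a spec-conforming preflight will not follow): answer the preflight directly in the HTTPS ankieta.example.com server block, for example in its location /:

        location / {
            if ($request_method = OPTIONS) {
                add_header 'Access-Control-Allow-Origin' $http_origin;
                add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, PATCH, DELETE';
                add_header 'Access-Control-Allow-Credentials' 'true';
                add_header 'Access-Control-Allow-Headers' 'Authorization,Content-Type,Accept,Origin,X-Requested-With';
                return 204;
            }
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }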

