Search Results

Search found 17317 results on 693 pages for 'memory upgrade'.

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process:
    1. Developer: checks in code.
    2. Build: polls the repo, sees the change, and kicks off a build that:
    3. Build: updates from the repo, builds with MSBuild, runs unit tests with NUnit.
    4. Build: creates the installer package.
    Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally RDP in, download the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues:
    - We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?)
    - When I generate packages with MSDeploy, I see that I don't have just one file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'?
    - It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here?
    It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested in seeing what everyone's enterprise experience is with the tool. Any suggestions?

  • How to solve "java.io.IOException: error=12, Cannot allocate memory" calling Runtime#exec()?

    - by Andrea Francia
    On my system I can't run a simple Java application that starts a process. I don't know how to solve this. Could you give me some hints?
    The program is:
        [root@newton sisma-acquirer]# cat prova.java
        import java.io.IOException;
        public class prova {
            public static void main(String[] args) throws IOException {
                Runtime.getRuntime().exec("ls");
            }
        }
    The result is:
        [root@newton sisma-acquirer]# javac prova.java && java -cp . prova
        Exception in thread "main" java.io.IOException: Cannot run program "ls": java.io.IOException: error=12, Cannot allocate memory
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:474)
            at java.lang.Runtime.exec(Runtime.java:610)
            at java.lang.Runtime.exec(Runtime.java:448)
            at java.lang.Runtime.exec(Runtime.java:345)
            at prova.main(prova.java:6)
        Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
            at java.lang.ProcessImpl.start(ProcessImpl.java:81)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:467)
            ... 4 more
    Configuration of the system:
        [root@newton sisma-acquirer]# java -version
        java version "1.6.0_0"
        OpenJDK Runtime Environment (IcedTea6 1.5) (fedora-18.b16.fc10-i386)
        OpenJDK Client VM (build 14.0-b15, mixed mode)
        [root@newton sisma-acquirer]# cat /etc/fedora-release
        Fedora release 10 (Cambridge)
    EDIT - Solution: this solves my problem, though I don't know exactly why:
        echo 0 > /proc/sys/vm/overcommit_memory
    Up-votes for whoever is able to explain :)
    Additional information, top output:
        top - 13:35:38 up 40 min,  2 users,  load average: 0.43, 0.19, 0.12
        Tasks: 129 total,   1 running, 128 sleeping,   0 stopped,   0 zombie
        Cpu(s):  1.5%us,  0.5%sy,  0.0%ni, 94.8%id,  3.2%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   1033456k total,   587672k used,   445784k free,    51672k buffers
        Swap:  2031608k total,        0k used,  2031608k free,   188108k cached
    Additional information, free output:
        [root@newton sisma-acquirer]# free
                     total       used       free     shared    buffers     cached
        Mem:       1033456     588548     444908          0      51704     188292
        -/+ buffers/cache:      348552     684904
        Swap:      2031608          0    2031608
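    For context, error=12 is ENOMEM, and with Runtime.exec() it usually comes from the fork() the JVM performs before exec(): under strict overcommit accounting the kernel can refuse to duplicate the JVM's large address space even though the child would immediately exec a tiny program. Relaxing the accounting (the echo above) or adding swap is why the error goes away. A minimal C-style sketch of the same failure mode, not the poster's code; the 512 MB allocation is purely illustrative, and whether fork() actually fails depends on the machine's overcommit mode and swap:

        // Build on Linux with: g++ fork_enomem.cc -o fork_enomem
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <sys/wait.h>
        #include <unistd.h>

        int main() {
            // Pretend to be a process with a large committed heap (illustrative size).
            const size_t kHeapBytes = 512 * 1024 * 1024;
            char* heap = static_cast<char*>(std::malloc(kHeapBytes));
            if (heap != nullptr) {
                std::memset(heap, 1, kHeapBytes);  // touch the pages so they are really committed
            }

            // fork() must account for a full copy of the address space; with
            // vm.overcommit_memory=2 (or no swap headroom) it can fail with ENOMEM,
            // which the JVM surfaces as "error=12, Cannot allocate memory".
            pid_t pid = fork();
            if (pid == -1) {
                std::perror("fork");  // prints the ENOMEM when overcommit is strict
                std::free(heap);
                return 1;
            }
            if (pid == 0) {
                execlp("ls", "ls", static_cast<char*>(nullptr));
                _exit(127);  // only reached if exec itself fails
            }
            waitpid(pid, nullptr, 0);
            std::free(heap);
            return 0;
        }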

  • Upgrade to Delphi 2010, or stick with Delphi 7 "forever"?

    - by tim11g
    I am an individual user of Delphi, starting back in the early Turbo Pascal days. I have quite a bit of code developed over the years, but I have never sold software commercially or used it for business. Historically, Borland supported non-professional users with lower-cost versions, but Embarcadero does not. As I consider upgrading to Delphi 2010, I am put off by the high price. Embarcadero is also trying to "encourage" upgrading by threatening to charge "new user" prices for upgrades after Dec 31st. I have several questions for the community to help me decide whether to upgrade.
    1) I have read about difficulties updating existing code to support the Unicode string types. I have no need for Unicode strings, and I am happy with the string support in D7. Will I have to modify existing code and components just to re-compile under D2010? Or are there compiler options to allow backward compatibility if the new string types are not required?
    2) The main reason I'm considering upgrading is for IDE improvements, and to get access to new APIs added to Windows since 2002. Are there any Windows 7 APIs or capabilities that would be impossible to support from my programs compiled using Delphi 7 (assuming appropriate JEDI API libraries, for example)?
    3) Is there anything else about Delphi 2010 that is really compelling for someone who is primarily interested in Win32 apps, and not working with databases? I have read that D2010 is slow to load, that other versions between D7 and D2010 have had stability issues, and that the help system was "broken". What is the biggest benefit of D2010?

  • Why is my UITableView with style UITableViewStyleGrouped consuming so much memory?

    - by prathumca
    Hello everyone, currently in my app I'm using a UITableView with style UITableViewStyleGrouped, as shown below.
        CGRect imgFrame = CGRectMake(0, 0, 320, 650);
        UITableView *myTable = [[UITableView alloc] initWithFrame:imgFrame style:UITableViewStyleGrouped];
        myTable.dataSource = self;
        myTable.delegate = self; // make the current object the event handler for the view
        [self.view addSubview:myTable];
        [myTable release];
    The data is stored in an array "dataArray". dataArray is a collection of arrays, where each array represents a section. Currently I have only one section with 100 records. When I installed my app onto my iPhone, I observed that this UITableView consumes 20 MB of iPhone memory. If I change the table view style to "UITableViewStylePlain", it consumes only 4 MB of memory. I'm trying to figure out where the exact problem is, but haven't been able to. What is wrong with "UITableViewStyleGrouped"? Regards, prathumca.

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I've got lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests, and the processes are by definition shielded from each other. I could configure FastCGI to use just one process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI- or FastCGI-bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix; it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now, growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (plus the occasional hit to the SQL Server backend). Although it is fast (for PHP anyway), Apache has real problems once 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).
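    If the C(++) route wins out, the heart of it is just a table that lives for the lifetime of the worker process: load it once, then answer every request from memory with no socket hop. A framework-agnostic sketch under those assumptions; the lookup.tsv file name and the handle_request() function are made up here, and wiring it into FastCGI or an embedded HTTP server is deliberately left out:

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unordered_map>

        // Loaded once per process; every request afterwards is a pure in-memory lookup.
        static const std::unordered_map<std::string, std::string>& lookup_table() {
            static const std::unordered_map<std::string, std::string> table = [] {
                std::unordered_map<std::string, std::string> t;
                std::ifstream in("lookup.tsv");  // hypothetical key<TAB>value file
                std::string line;
                while (std::getline(in, line)) {
                    const auto tab = line.find('\t');
                    if (tab != std::string::npos) {
                        t.emplace(line.substr(0, tab), line.substr(tab + 1));
                    }
                }
                return t;
            }();
            return table;
        }

        // Stand-in for whatever the real server would call per request.
        std::string handle_request(const std::string& key) {
            const auto& table = lookup_table();
            const auto it = table.find(key);
            return it != table.end() ? it->second : "404";
        }

        int main() {
            std::cout << handle_request("some-key") << "\n";  // demo call
        }

    Note that each worker process carries its own copy of the table, so memory use scales with the number of FastCGI processes; that duplication is the main trade-off against a single shared memcache instance.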

  • _dl_runtime_resolve -- When do the shared objects get loaded in to memory?

    - by windfinder
    We have a message processing system with high performance demands. Recently we have noticed that the first message takes many times longer than subsequent messages. A bunch of transformation and message augmentation happens as a message goes through our system, much of it done by way of external libraries. I just profiled this issue (using callgrind), comparing a "run" of just one message with a "run" of many messages (providing a baseline for comparison). The main difference I see is the function "do_lookup_x" taking up a huge amount of time. Looking at the various calls to this function, they all seem to be called by the common function _dl_runtime_resolve. I'm not sure what this function does, but to me this looks like the first time the various shared libraries are being used, and they are then being loaded into memory by the dynamic linker. Is this a correct assumption? That the binary will not load the shared libraries into memory until they are being prepped for use, and therefore we see a massive slowdown on the first message, but on none of the subsequent ones? How do we go about avoiding this? Note: we operate on the microsecond scale.
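    For what it's worth, _dl_runtime_resolve is the glibc stub that resolves a PLT entry the first time a function from a shared object is called; the libraries are already mapped at startup, and it is the per-symbol lookup that is deferred ("lazy binding") and lands on the first message. One way to pull that cost forward to startup is eager binding: export LD_BIND_NOW=1 before launching the binary (or link with -Wl,-z,now), or warm up a dlopen'ed plugin with RTLD_NOW. A sketch, where libtransform.so and transform_message are made-up names standing in for the real external library:

        // Build on Linux with: g++ warmup.cc -ldl -o warmup
        #include <dlfcn.h>
        #include <cstdio>

        int main() {
            // RTLD_NOW resolves every symbol at load time, so the per-symbol
            // _dl_runtime_resolve work happens here, not while the first
            // message is in flight.
            void* handle = dlopen("libtransform.so", RTLD_NOW | RTLD_GLOBAL);
            if (handle == nullptr) {
                std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            // Optionally touch the entry points used on the hot path.
            using transform_fn = int (*)(const char*);
            auto transform = reinterpret_cast<transform_fn>(dlsym(handle, "transform_message"));
            if (transform == nullptr) {
                std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
            }

            // ... run the normal message loop here; first-call resolution cost is gone ...
            dlclose(handle);
            return 0;
        }

    A lower-tech alternative that often suffices at microsecond scale is simply pushing one or two warm-up messages through the pipeline before declaring the process live.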

  • Is there a way to programmatically tell if a particular block of memory was not freed by FastMM?

    - by Wodzu
    I am trying to detect if a block of memory was not freed. Of course, the manager tells me that by dialog box or log file, but what if I would like to store the results in a database? For example, I would like to have in a database table the names of the routines which allocated the given blocks. After reading the documentation of FastMM, I know that since version 4.98 we have the possibility to be notified by the manager about memory allocations, frees and reallocations as they occur. For example, the OnDebugFreeMemFinish event passes us a PFullDebugBlockHeader which contains useful information. There is one thing that PFullDebugBlockHeader is missing: the information whether the given block was freed by the application. Unless OnDebugFreeMemFinish is called only for not-freed blocks? This is what I do not know and would like to find out. The problem is that even hooking into the OnDebugFreeMemFinish event, I was unable to find out if the block was freed or not. Here is an example:
        program MemLeakTest;
        {$APPTYPE CONSOLE}
        uses
          FastMM4, ExceptionLog, SysUtils;

        procedure MemFreeEvent(APHeaderFreedBlock: PFullDebugBlockHeader; AResult: Integer);
        begin
          // This is executed at the end, but how should I know that this block should be freed
          // by the application? Unless this is executed ONLY for not-freed blocks.
        end;

        procedure Leak;
        var
          MyObject: TObject;
        begin
          MyObject := TObject.Create;
        end;

        begin
          OnDebugFreeMemFinish := MemFreeEvent;
          Leak;
        end.
    What I am missing is a callback like:
        procedure OnMemoryLeak(APointer: PFullDebugBlockHeader);
    After browsing the source of FastMM I saw that there is a procedure:
        procedure LogMemoryLeakOrAllocatedBlock(APointer: PFullDebugBlockHeader; IsALeak: Boolean);
    which could be overridden, but maybe there is an easier way?

  • Question about memory allocation when initializing char arrays in C/C++.

    - by Carlos Nunez
    Before anything, I apologize if this question has been asked before. I am programming a simple packet sniffer for a class project. For a little while, I ran into the issue where the source and destination of a packet appeared to be the same. For example, the source and destination of an Ethernet frame would be the same MAC address all of the time. I custom-made ether_ntoa(char *) because Windows does not seem to have ethernet.h like Linux does. A code snippet is below:
        char *ether_ntoa(u_char etheraddr[ETHER_ADDR_LEN])
        {
            int i, j;
            char eout[32];

            for(i = 0, j = 0; i < 5; i++)
            {
                eout[j++] = etheraddr[i] >> 4;
                eout[j++] = etheraddr[i] & 0xF;
                eout[j++] = ':';
            }
            eout[j++] = etheraddr[i] >> 4;
            eout[j++] = etheraddr[i] & 0xF;
            eout[j++] = '\0';

            for(i = 0; i < 17; i++)
            {
                if(eout[i] < 10)
                    eout[i] += 0x30;
                else if(eout[i] < 16)
                    eout[i] += 0x57;
            }
            return(eout);
        }
    I solved the problem by using malloc() to allocate the memory (i.e. instead of char eout[32], I used char *eout; eout = (char *) malloc (32);). However, I thought that the compiler assigned different memory locations when one sized a char array at compile time. Is this incorrect? Thanks! Carlos Nunez
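    The underlying issue looks like the lifetime of eout: it is an automatic (stack) array, so it is gone the moment ether_ntoa returns, and every call reuses the same stack slot, which is why source and destination came out identical. malloc() "fixes" it only because heap blocks outlive the call (and leak unless each one is freed). A common alternative is caller-provided storage; a sketch under that assumption, where the _r suffix and the 18-byte buffer size are conventions chosen for the example:

        #include <cstdio>

        enum { ETHER_ADDR_LEN = 6, ETHER_STR_LEN = 18 };  // "xx:xx:xx:xx:xx:xx" + '\0'

        // The caller owns the buffer, so there is no stack-lifetime problem and nothing to free.
        char* ether_ntoa_r(const unsigned char addr[ETHER_ADDR_LEN], char out[ETHER_STR_LEN]) {
            std::snprintf(out, ETHER_STR_LEN, "%02x:%02x:%02x:%02x:%02x:%02x",
                          addr[0], addr[1], addr[2], addr[3], addr[4], addr[5]);
            return out;
        }

        int main() {
            const unsigned char src[ETHER_ADDR_LEN] = {0x00, 0x1a, 0x2b, 0x3c, 0x4d, 0x5e};
            const unsigned char dst[ETHER_ADDR_LEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
            char src_str[ETHER_STR_LEN], dst_str[ETHER_STR_LEN];

            // Two distinct buffers, two distinct strings.
            std::printf("%s -> %s\n", ether_ntoa_r(src, src_str), ether_ntoa_r(dst, dst_str));
        }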

  • How to store an integer value of 4 bytes in a chunk of memory which is malloc'ed as type char

    - by Adi
    Dear all, Hello Guys!! This is my first post in the forum. I am really looking forward to having good fun on this site. My question is:
        int mem_size = 10;
        char *start_ptr;

        if((start_ptr = malloc(mem_size*1024*1024*sizeof(char))) == NULL)
        {
            return -1;
        }
    I have allocated a chunk of memory of type char whose size is, say, 10 MB (i.e. mem_size = 10). Now I want to store the size information in the header of the memory chunk. To make myself clearer, let's say:
        start_ptr = 0xaf868004 (this is the value I got from my execution; it changes every time)
    Now I want to put the size information at the start of this pointer, i.e.
        *start_ptr = mem_size*1024*1024;
    But I am not able to put this information in start_ptr. I think the reason is that my pointer is of type char, which only takes one byte, but I am trying to store an int, which takes 4 bytes. I am not sure how to fix this problem. I would greatly appreciate your suggestions. Cheers!! Aditya
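    One common approach, sketched below under the assumption that a few bytes at the front of the block can be sacrificed as a header: memcpy the size into the start of the allocation (which sidesteps the alignment questions a plain int* cast can raise) and hand the caller the address just past the header. Helper names such as alloc_with_size are invented for the example; a production allocator would also round the header up to alignof(std::max_align_t) so the returned pointer stays aligned for any type.

        #include <cstddef>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        // Reserve room for a size_t header in front of the usable region.
        static char* alloc_with_size(size_t usable_bytes) {
            char* block = static_cast<char*>(std::malloc(sizeof(size_t) + usable_bytes));
            if (block == nullptr) return nullptr;
            std::memcpy(block, &usable_bytes, sizeof(size_t));  // store the size in the header
            return block + sizeof(size_t);                      // caller only sees the usable part
        }

        static size_t stored_size(const char* user_ptr) {
            size_t n;
            std::memcpy(&n, user_ptr - sizeof(size_t), sizeof(size_t));
            return n;
        }

        static void free_with_size(char* user_ptr) {
            std::free(user_ptr - sizeof(size_t));  // free the real start of the block
        }

        int main() {
            const size_t mem_size = 10;  // MB, as in the question
            char* p = alloc_with_size(mem_size * 1024 * 1024);
            if (p == nullptr) return -1;
            std::printf("header says: %zu bytes\n", stored_size(p));
            free_with_size(p);
            return 0;
        }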

  • How much is too much memory allocation in NDK?

    - by Maximus
    The NDK download page notes that, "Typical good candidates for the NDK are self-contained, CPU-intensive operations that don't allocate much memory, such as signal processing, physics simulation, and so on." I came from a C background and was excited to try to use the NDK to operate most of my OpenGL ES functions and any native functions related to physics, animation of vertices, etc... I'm finding that I'm relying quite a bit on Native code and wondering if I may be making some mistakes. I've had no trouble with testing at this point, but I'm curious if I may run into problems in the future. For example, I have game struct defined (somewhat like is seen in the San-Angeles example). I'm loading vertex information for objects dynamically (just what is needed for an active game area) so there's quite a bit of memory allocation happening for vertices, normals, texture coordinates, indices and texture graphic data... just to name the essentials. I'm quite careful about freeing what is allocated between game areas. Would I be safer setting some caps on array sizes or should I charge bravely forward as I'm going now?
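    If the per-area churn of mallocs and frees ever does show up as a problem, one middle ground between capping every array and fully dynamic allocation is a preallocated arena per game area: grab one worst-case block up front, hand out vertex/normal/index slices from it, and reset it when the area unloads. A rough sketch; the 8 MB cap, the class name and the slice sizes are arbitrary illustrations, not NDK API:

        #include <cstddef>
        #include <cstdint>
        #include <cstdio>
        #include <cstdlib>

        // A trivial bump arena: one malloc up front, many cheap sub-allocations,
        // one reset per game area instead of many small free() calls.
        class Arena {
        public:
            explicit Arena(std::size_t capacity)
                : base_(static_cast<std::uint8_t*>(std::malloc(capacity))),
                  capacity_(capacity), used_(0) {}
            ~Arena() { std::free(base_); }

            void* alloc(std::size_t bytes) {
                const std::size_t aligned = (bytes + 15) & ~static_cast<std::size_t>(15);  // keep 16-byte alignment
                if (base_ == nullptr || used_ + aligned > capacity_) return nullptr;
                void* p = base_ + used_;
                used_ += aligned;
                return p;
            }
            void reset() { used_ = 0; }  // called when the game area unloads

        private:
            std::uint8_t* base_;
            std::size_t capacity_;
            std::size_t used_;
        };

        int main() {
            Arena area_arena(8 * 1024 * 1024);  // illustrative 8 MB cap
            float* vertices = static_cast<float*>(area_arena.alloc(3000 * 3 * sizeof(float)));
            std::uint16_t* indices = static_cast<std::uint16_t*>(area_arena.alloc(9000 * sizeof(std::uint16_t)));
            std::printf("arena handed out %p and %p\n",
                        static_cast<void*>(vertices), static_cast<void*>(indices));
            area_arena.reset();  // the next area reuses the same block
            return 0;
        }

    Whether that is worth doing depends on measurement; plenty of NDK code does fine with plain malloc/free as long as peak usage per area is bounded.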

  • CArray doesn't call copy constructors on memory reallocations, now what?

    - by MMx
    Suppose I have a class that requires the copy constructor to be called to make a correct copy:
        struct CWeird
        {
            CWeird()
            {
                number = 47;
                target = &number;
            }
            CWeird(const CWeird &other) : number(other.number), target(&number)
            {
            }
            void output()
            {
                printf("%d %d\n", *target, number);
            }

            int *target, number;
        };
    Now the trouble is that CArray doesn't call copy constructors on its elements when reallocating memory (it only does a memcpy from the old memory to the new), e.g. this code
        CArray<CWeird> a;
        a.SetSize(1);
        a[0].output();

        a.SetSize(2);
        a[0].output();
    results in
        47 47
        -572662307 47
    I don't get this. Why is it that std::vector can copy the same objects properly and CArray can't? What's the lesson here? Should I use only classes that don't require explicit copy constructors? Or is it a bad idea to use CArray for anything serious?
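    The short answer to the std::vector question: when vector reallocates, it copy-constructs (or, with a suitable move constructor, move-constructs) every element into the new storage and destroys the old ones, so a self-pointing member like target is re-established each time, whereas CArray relocates the raw bytes and leaves target pointing at freed memory. A small self-contained demo of the vector behaviour, the same idea as CWeird reduced to the essentials:

        #include <cstdio>
        #include <vector>

        struct SelfRef {
            SelfRef() : number(47), target(&number) {}
            SelfRef(const SelfRef& other) : number(other.number), target(&number) {}  // re-point at our own member

            void output() const { std::printf("%d %d\n", *target, number); }

            int number;
            int* target;
        };

        int main() {
            std::vector<SelfRef> v;
            v.emplace_back();
            v[0].output();   // 47 47

            v.resize(100);   // forces reallocation; existing elements are copy-constructed across
            v[0].output();   // still 47 47, because the copy constructor re-pointed target
        }

    If CArray has to stay, the usual lesson is to keep only memcpy-safe, self-contained types in it (for example, store an index or offset rather than a pointer into the object itself); otherwise std::vector is the simpler, safer container for types like this.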

  • nginx proxying websockets, must be missing something

    - by CodeMonkey
    I have a basic chat app written in node.js using express and socket.io; it works fine when connecting directly to node on port 3000, but doesn't work when I try to use nginx v1.4.2 as a proxy. I start off using the connection map:
        map $http_upgrade $connection_upgrade {
            default upgrade;
            ''      close;
        }
    Then add the locations:
        location /socket.io/ {
            proxy_pass http://node;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Request-Id $txid;
            proxy_set_header X-Session-Id $uid_set+$uid_got;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_buffering off;
            proxy_read_timeout 86400;
            keepalive_timeout 90;
            proxy_cache off;
            access_log /var/log/nginx/webservice.access.log;
            error_log /var/log/nginx/webservice.error.log;
        }

        location /web-service/ {
            proxy_pass http://node;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Request-Id $txid;
            proxy_set_header X-Session-Id $uid_set+$uid_got;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_buffering off;
            proxy_read_timeout 86400;
            keepalive_timeout 90;
            access_log /var/log/nginx/webservice.access.log;
            error_log /var/log/nginx/webservice.error.log;
            rewrite /web-service/(.*) /$1 break;
            proxy_cache off;
        }
    These are built up using all of the tips to get it working that I could find. The error log does not show any errors (except when I stop node to test that the error logging is working). When going through nginx I do see a websocket connection in the dev tools, with a status of 101, but the frames tab under the request is empty. The only difference I can see in the response headers is a case difference, "upgrade" vs "Upgrade".
    Through nginx:
        Connection:upgrade
        Date:Fri, 08 Nov 2013 11:49:25 GMT
        Sec-WebSocket-Accept:LGB+iEBb8Ql9zYfqNfuuXzdzjgg=
        Server:nginx/1.4.2
        Upgrade:websocket
    Direct from node:
        Connection:Upgrade
        Sec-WebSocket-Accept:8nwPpvg+4wKMOyQBEvxWXutd8YY=
        Upgrade:websocket
    Output from node (when used through nginx):
        debug - served static content /socket.io.js
        debug - client authorized
        info - handshake authorized iaej2VQlsbLFIhachyb1
        debug - setting request GET /socket.io/1/websocket/iaej2VQlsbLFIhachyb1
        debug - set heartbeat interval for client iaej2VQlsbLFIhachyb1
        debug - client authorized for
        debug - websocket writing 1::
        debug - websocket writing 5:::{"name":"message","args":[{"message":"welcome to the chat"}]}
        debug - clearing poll timeout
        debug - jsonppolling writing io.j[0]("8::");
        debug - set close timeout for client 7My3F4CuvZC0I4Olhybz
        debug - jsonppolling closed due to exceeded duration
        debug - clearing poll timeout
        debug - jsonppolling writing io.j[0]("8::");
        debug - set close timeout for client AkCYl0nWNZAHeyUihyb0
        debug - jsonppolling closed due to exceeded duration
        debug - setting request GET /socket.io/1/xhr-polling/iaej2VQlsbLFIhachyb1?t=1383911206158
        debug - setting poll timeout
        debug - discarding transport
        debug - cleared heartbeat interval for client iaej2VQlsbLFIhachyb1
        debug - setting request GET /socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911216160&i=0
        debug - setting poll timeout
        debug - discarding transport
        debug - clearing poll timeout
        debug - clearing poll timeout
        debug - jsonppolling writing io.j[0]("8::");
        debug - set close timeout for client iaej2VQlsbLFIhachyb1
        debug - jsonppolling closed due to exceeded duration
        debug - setting request GET /socket.io/1/jsonp-polling/iaej2VQlsbLFIhachyb1?t=1383911236429&i=0
        debug - setting poll timeout
        debug - discarding transport
        debug - cleared close timeout for client iaej2VQlsbLFIhachyb1
    When connecting direct to node, the client does not start polling. The normal HTTP stuff node outputs works fine with nginx. Clearly something I am not seeing, but I am stuck, thanks :)

  • Verizon HTC Eris - No sound on incoming phone call after 2.1 droid upgrade. Help!?

    - by Michael Rosario
    Has anyone had the following issue? I've had several issues as well: No sound when call connects, ringing or people talking. Apps would force close like weather. I did call HTC support and they had me go into Menu, Settings, Manage Applications and then clear the cache of the problem app. They also had me clear the cache once the browser was open and then do a soft reset (power off the phone and take the battery out for 15 seconds) This did fix some issues, but I am constantly turning my phone on and off to get sound back on call or to make the assigned ringtones work. There's no rhyme or reason as to why they stop working... Anyone else tried anything different??? Related problem statement... http://community.htc.com/na/htc-forums/android/f/32/p/2601/10344.aspx#10344 My wife and I are most concerned about the incoming call issue.

  • Upgrade AirPort on Macs to support Snow Leopard's Wake on Wireless/WLAN?

    - by wojo
    Snow Leopard now supports Wake on WLAN, but not all hardware supports this. For example, my Octo Mac Pro from early 2008 has an AirPort card, but it does not support this. Nor does my 2007 2.33GHz MacBook Pro. For reference to what is needed, look at http://www.macrumors.com/2009/08/28/a-closer-look-at-snow-leopards-wake-on-demand-feature/ which includes a screenshot of what the System Profiler should show. It's pretty hard to find Apple parts, but is it possible to put newer cards into these machines to have them support Wake on Wireless?

  • seeking to upgrade my bash magic. help decipher this command: bash -s stable

    - by tim
    OK, so I'm working through a tutorial to get rvm installed on my Mac. The bash command to get rvm via curl is:
        curl -L https://get.rvm.io | bash -s stable
    I understand the first half's curl command at location rvm.io, and that the result is piped to the subsequent bash command, but I'm not sure what that command is doing. My questions:
    - -s: I'm always confused about how to refer to these. What type of thing is this: a command line argument? a switch? something else?
    - -s: what is it doing? I have googled for about half an hour, but not being sure how to refer to it makes it difficult.
    - stable: what is this?
    tl;dr: help me decipher the command bash -s stable
    To those answering this post: I aspire to one day be as bash literate as you. Until then, opstards such as myself thank you for the help!

  • On an HTC Hero - with provider Orange in the UK - can one upgrade the OS?

    - by user32648
    Currently it's running Android 1.5 - I'd clearly prefer to be on 1.6, 2.0 or 2.1 - but there seems to be limited information on this available on the internet. If anyone can confirm whether it's been done before, what versions are compatible with the handset, and any problems, it'd be really useful. Aside - it's kind of poor that a phone sold in March 2010 only runs Android 1.5... :(

  • Should I upgrade my VPS to a dedicated host?

    - by Mr. Hedgehog
    I've been running a website for 4 months now and in that time it has grown from a few hundred unique visits/day to about 30,000 unique visits/day. I have been on Linode the whole time and have steadily upgraded to a 4GB node. Unfortunately, my app is very MySQL heavy and even with caching, the server frequently stops responding at peak times. During these times, all 4 cores are maxed on the machine (though the Linode 4GB plan only provides a 1/5 share of the CPU). The only way to get it running again is to restart MySQL. I don't really know what to do. I have tried optimising the InnoDB settings and it doesn't seem to be helping. Would moving to a dedicated host be a good solution? I can get 4-5x the power of my node for a similar price, and with decent providers too.

  • Intel video driver upgrade now makes my external monitor display red-only with HDMI cable.

    - by Eli
    I have an HP G60 notebook with a Mobile Intel 4 Series Express Chipset video display (the driver version is 8.15.10.2021). The OS is Win7 Home Premium. I also have an LG widescreen display that I was connecting to the laptop using an HDMI cable. It was working great until I applied an Intel 4 Series Express Chipset update from Windows Update. After the update, if I plug the laptop into the monitor using the HDMI cable, it appears I only get the magenta channel, because that is all I see. If I connect with the VGA cable, it works fine. So, is it possible it is just the HDMI cable (which IS cheap in this case - fire sale at Blockbuster), and not related to the video driver update at all?

  • xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed it had overwritten all my modifications, so I recovered the layout and overwrote the symbol file's base entry. Sadly KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to be using setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow or how do I get it to work?

  • VMWare Server :: VM set to 2gb RAM but vmware process shows 100mb physical, 1900mb virtual

    - by brad
    I've set up a VMWare instance to run the CastIron Integration Appliance. I allocated 2GB of memory to the instance, assuming it would take this as physical memory (my server has 8GB total). When I view top on the server, however, the vmware-vmx process has about 100MB resident memory and 1900MB virtual. Running CastIron, it reports that the appliance often hits 50% memory usage. Does this mean I'm using 900MB of hard drive space as memory? I wanted VMWare to use 2GB of physical memory, no swap. Can anyone tell me how to achieve this?
    Setup: Debian Lenny 5.0.3, VMWare Server 2.0.2

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin.
    Now, some time has passed and a newer version of the Flash Player plugin has been released, so it is time for me to push out the updated version. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads like it would be used for add-ons to a larger master program and not for updates that are released over time.
    I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me into trouble. In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?
