Search Results

Search found 15872 results on 635 pages for 'safe remove'.


  • When exactly would a DLL use a different heap than the executable?

    - by Milo
    I know that if your DLL statically links against a different version of the runtime than the executable, it gets its own heap. It will also do so if it is explicitly instructed to create its own heap. Under these circumstances, it is unsafe for the DLL to delete what the exe allocated. In what cases does this NOT apply (i.e., when is it safe for the DLL to delete what the exe allocated)? Is it safe if both the exe and the DLL statically link against the same runtime library? Thanks
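
    A minimal sketch of the pattern that sidesteps the question entirely (widget_create/widget_destroy are illustrative names, not from the question): whichever module allocates the memory also exposes the function that frees it, so the matching CRT heap is always used regardless of how the two modules link the runtime.

        #include <cstddef>

        // Hypothetical exported pair: the DLL both allocates and frees,
        // so the same CRT heap is used on both sides.
        extern "C" __declspec(dllexport) int* widget_create(std::size_t n) {
            return new int[n];          // allocated on the DLL's heap
        }

        extern "C" __declspec(dllexport) void widget_destroy(int* p) {
            delete[] p;                 // freed by the heap that allocated it
        }

    If both modules link the same shared CRT DLL (the /MD runtime), they generally share one heap and a cross-module delete works, but the export-a-destroy-function pattern stays safe even when that assumption breaks.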

    Read the article

  • Sharing the same file between different projects

    - by selsine
    Hi Everyone, For version control we currently use Visual Source Safe and are thinking of migrating to another version control system (SVN, Mercurial, Git). Currently we use Visual Source Safe's "Shared" file feature quite heavily. This allows us to share code between design and runtimes of a single product, and between multiple products as well. For example: **Product One** - Design Login.cpp Login.h Helper.cpp Helper.h - Runtime Login.cpp Login.h Helper.cpp Helper.h **Product Two** - Design Login.cpp Login.h - Launcher Login.cpp Login.h - Runtime Login.cpp Login.h In this example Login.cpp and Login.h contain common code that all of our projects need, while Helper.cpp and Helper.h are only used in Product One. In Visual Source Safe they are shared between the specific projects, which means that whenever the files are updated in one project they are updated in any project they are shared with. This is a simple example but hopefully it explains why we use the shared feature: to reduce the amount of duplicated code and ensure that when a bug is fixed all projects automatically have access to the new fixed code. After researching alternatives to Visual Source Safe it seems that most version control systems do not have the idea of shared files; instead they seem to use the idea of sub repositories. (http://mercurial.selenic.com/wiki/subrepos http://svnbook.red-bean.com/en/1.0/ch07s03.html) My question (after all of that) is: what are the best practices for achieving this with other version control systems? Should we restructure our projects so that two copies of the files do not exist and an include directory is used instead? e.g. Product One Design Login.cpp Login.h Runtime Login.cpp Login.h Common Helper.cpp Helper.h This still leaves the question of what to do with Login.cpp and Login.h. Should the shared files be moved to their own repository and then compiled into a lib or dll? This would make bug fixing more time consuming as the lib projects would have to be edited and then rebuilt. Should we use externals or sub repositories? Should we combine our projects (i.e. runtime, design, and launcher) into one large project? Any help would be appreciated. We have the feeling that our project design has evolved based on the tools that we used and now that we are thinking of switching tools it's difficult for us to see how we can best modify our practices. Or maybe we are the only people out there doing this...? Also, we use Visual Studio for all of our stuff. Thanks.
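
    For what it's worth, a minimal sketch of the two mechanisms mentioned above, with placeholder repository URLs: both pull a shared "Common" directory into each product's working copy so the files live in exactly one place.

        # Subversion externals, set as a property on the product directory:
        svn propset svn:externals "Common https://svn.example.com/common/trunk" .
        svn commit -m "Pull shared code in via svn:externals"

        # Roughly equivalent Git submodule:
        git submodule add https://git.example.com/common.git Common
        git commit -m "Pull shared code in as a submodule"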

    Read the article

  • Thread safety with heap-allocated memory

    - by incrediman
    I was reading this: http://en.wikipedia.org/wiki/Thread_safety Is the following function thread-safe? void foo(int y){ int * x = new int[50]; /*...do some stuff with the allocated memory...*/ delete x; } In the article it says that to be thread-safe you can only use variables from the stack. Really? Why? Wouldn't subsequent calls of the above function allocate memory elsewhere?
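
    A small contrast may make the article's point clearer; this is a sketch only, not from the question. The posted foo() is thread-safe because every piece of memory it touches is reachable only through its own stack frame (the pointer is a local, and each call gets a distinct allocation). What breaks thread safety is shared state, whether it lives on the stack or the heap.

        #include <vector>

        // Thread-safe: the allocation is reachable only through this call's locals.
        void foo_safe(int y) {
            std::vector<int> x(50);
            x[0] = y;
        }

        // Not thread-safe: the pointer itself is shared, so two threads can race
        // on the same allocation and even double-delete it.
        int* g_x = nullptr;
        void foo_unsafe(int y) {
            g_x = new int[50];
            g_x[0] = y;
            delete[] g_x;
        }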

    Read the article

  • final fields and thread-safety

    - by pcjuzer
    Should all fields, including inherited fields, of a purposely immutable Java class be 'final' in order for it to be thread-safe, or is it enough to have no mutator methods? Suppose I have a POJO with non-final fields where every field is of some immutable type. This POJO has getters and setters, and a constructor which sets some initial value. If I extend this POJO and knock out the mutator methods, thus making it immutable, will the extension class be thread-safe?
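
    A sketch of why the final modifier matters here (hypothetical Point class, not from the question): final fields get a special guarantee from the Java Memory Model, so a properly constructed instance is visible correctly to every thread without synchronization, whereas a subclass that merely hides its parent's setters does not earn that guarantee.

        // Immutable and safely publishable: both fields are final, so any thread
        // that obtains a reference to a constructed Point sees x and y correctly.
        public final class Point {
            private final int x;
            private final int y;

            public Point(int x, int y) {
                this.x = x;
                this.y = y;
            }

            public int getX() { return x; }
            public int getY() { return y; }
        }

    With non-final fields, a thread that receives the object through a data race may observe the default (zero) values even though no setter was ever called, so removing the setters alone is not enough.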

    Read the article

  • safely reading directory contents

    - by Jack
    Is it safe to read directory entries via readdir() or scandir() while files are being created/deleted in that directory? Should I prefer one over the other? When I say "safe" I mean that the entries returned by these functions are valid and can be operated on without crashing the program. Thanks.
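
    A sketch of the usual defensive pattern, assuming POSIX readdir()/fstatat(): the names returned were valid at some point during the scan, but a file can disappear between being listed and being acted on, so ENOENT on the follow-up call has to be treated as normal rather than as an error.

        #include <dirent.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>

        int scan(const char *path) {
            DIR *d = opendir(path);
            if (!d)
                return -1;
            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                struct stat st;
                if (fstatat(dirfd(d), e->d_name, &st, AT_SYMLINK_NOFOLLOW) == -1) {
                    if (errno == ENOENT)
                        continue;   /* entry was deleted after being listed */
                    perror(e->d_name);
                    continue;
                }
                /* ... use st ... */
            }
            closedir(d);
            return 0;
        }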

    Read the article

  • Varnish default.vcl grace period

    - by Vladimir
    These are my settings for a grace period (/etc/varnish/default.vcl) sub vcl_recv { .... set req.grace = 360000s; ... } sub vcl_fetch { ... set beresp.grace = 360000s; ... } I tested Varnish on localhost with nodejs as the backend server. I started the server and the site was up. Then I stopped the server, and the site went down in less than 2 minutes. It says: Error 503 Service Unavailable Service Unavailable Guru Meditation: XID: 1890127100 Varnish cache server Could you tell me what could be the problem? sub vcl_fetch { if (beresp.ttl < 120s) { ##std.log("Adjusting TTL"); set beresp.ttl = 36000s; ##120s; } # Do not cache the object if the backend application does not want us to. if (beresp.http.Cache-Control ~ "(no-cache|no-store|private|must-revalidate)") { return(hit_for_pass); } # Do not cache the object if the status is not in the 200s if (beresp.status >= 300) { # Remove the Set-Cookie header #remove beresp.http.Set-Cookie; return(hit_for_pass); } # # Everything below here should be cached # # Remove the Set-Cookie header ####remove beresp.http.Set-Cookie; # Set the grace time ## set beresp.grace = 1s; //change this to minutes in case of app shutdown set beresp.grace = 360000s; ## 10 hour - reduce if it has negative impact # Static assets - browsers cache them for a long time. if (req.url ~ "\.(css|js|.js|jpg|jpeg|gif|ico|png)\??\d*$") { /* Remove Expires from backend, it's not long enough */ unset beresp.http.expires; /* Set the clients TTL on this object */ set beresp.http.cache-control = "public, max-age=31536000"; /* marker for vcl_deliver to reset Age: */ set beresp.http.magicmarker = "1"; } else { set beresp.http.Cache-Control = "private, max-age=0, must-revalidate"; set beresp.http.Pragma = "no-cache"; } if (req.url ~ "\.(css|js|min|)\??\d*$") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ##do not duplicate these settings if (req.url ~ ".css") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".js") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } if (req.url ~ ".min") { set beresp.do_gzip = true; unset beresp.http.expires; set beresp.http.cache-control = "public, max-age=31536000"; set beresp.http.expires = beresp.ttl; set beresp.http.age = "0"; } ## If the request to the backend returns a code other than 200, restart the loop ## If the number of restarts reaches the value of the parameter max_restarts, ## the request will be error'ed. max_restarts defaults to 4. This prevents ## an eternal loop in the event that, e.g., the object does not exist at all.
if (beresp.status != 200 && beresp.status != 403 && beresp.status != 404) { return(restart); } if (beresp.status == 302) { return(deliver); } # Never cache posts if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(hit_for_pass); } ##check this setting to ensure that it does not cause issues for browsers with no gzip if (beresp.http.content-type ~ "text") { set beresp.do_gzip = true; } if (beresp.http.Set-Cookie) { return(deliver); } ##if (req.url == "/index.html") { set beresp.do_esi = true; ##} ## check if this is needed or should be used # return(deliver); the object return(deliver); } sub vcl_recv { ##avoid leeching of images call hot_link; set req.grace = 360000s; ##2m ## if one backend is down - use another if (req.restarts == 0) { set req.backend = cache_director; ##we can specify individual VMs } else if (req.restarts == 1) { set req.backend = cache_director; } ## post calls should not be cached - add cookie for these requests if using micro-caching # Pass requests that are not GET or HEAD if (req.request != "GET" && req.request != "HEAD") { return(pass); ## return(pass) goes to backend - not cache } # Don't cache the result of a redirect if (req.http.Referer ~ "redir" || req.http.Origin ~ "jumpto") { return(pass); } # Don't cache the result of a redirect (asking for logon) if (req.http.Referer ~ "post" || req.http.Referer ~ "submit" || req.http.Referer ~ "add" || req.http.Referer ~ "ask") { return(pass); } # Never cache posts - ensure that we do not use these strings in our URLs' that need to be cached if (req.url ~ "\/post\/" || req.url ~ "\/submit\/" || req.url ~ "\/ask\/" || req.url ~ "\/add\/") { return(pass); } ## if (req.http.Authorization || req.http.Cookie) { if (req.http.Authorization) { /* Not cacheable by default */ return (pass); } # Handle compression correctly. Different browsers send different # "Accept-Encoding" headers, even though they mostly all support the same # compression mechanisms. By consolidating these compression headers into # a consistent format, we can reduce the size of the cache and get more hits. # @see: http:// varnish.projects.linpro.no/wiki/FAQ/Compression if (req.http.Accept-Encoding) { if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg|ico)$") { # No point in compressing these remove req.http.Accept-Encoding; } else if (req.http.Accept-Encoding ~ "gzip") { # If the browser supports it, we'll use gzip. set req.http.Accept-Encoding = "gzip"; } else if (req.http.Accept-Encoding ~ "deflate") { # Next, try deflate if it is supported. set req.http.Accept-Encoding = "deflate"; } else { # Unknown algorithm. Remove it and send unencoded. unset req.http.Accept-Encoding; } } # lookup graphics, css, js & ico files in the cache if (req.url ~ "\.(png|gif|jpg|jpeg|css|.js|ico)$") { return(lookup); } ##added on 0918 - check if it causes issues with user specific content if (req.request == "GET" && req.http.cookie) { return(lookup); } # Pipe requests that are non-RFC2616 or CONNECT which is weird. if (req.request != "GET" && req.request != "HEAD" && req.request != "PUT" && req.request != "POST" && req.request != "TRACE" && req.request != "OPTIONS" && req.request != "DELETE") { ##closing connection and calling pipe return(pipe); } ##purge content via localhost only if (req.request == "PURGE") { if (!client.ip ~ purge) { error 405 "Not allowed."; } return(lookup); } ## do we need this? ## return(lookup); }
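
    A hedged aside, not from the question: in Varnish 2.1/3.x a long beresp.grace only keeps stale objects around; in practice Varnish serves them past their TTL mainly while it considers the backend sick, which requires a health probe on the backend definition, and only for objects that were actually in cache. A minimal sketch with placeholder host, port and probe values:

        backend nodejs {
            .host = "127.0.0.1";
            .port = "3000";
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            if (req.backend.healthy) {
                set req.grace = 30s;
            } else {
                set req.grace = 10h;
            }
        }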

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and RAM 16 GB DDR3 RAM varnish+nginx+php-fpm+apc for a not very heavy WordPress blog with W3 Total Cache and CDN My problem is that after 55 hits per second according to blitz.io varnish starts giving out timeouts. CPU usage at this time is hardly 1%. Free memory at all time remains 10GB+. I tried benchmarking php-fpm directly with result of 150hits/s without any timeouts. But after that the CPU usage goes 100% and it stops responding. Can you help me optimize it to handle more? As i understand nginx has nothing to do over here so i dont put its config. php-fpm config listen = /tmp/php5-fpm.sock listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 150 pm.start_servers = 7 pm.min_spare_servers = 2 pm.max_spare_servers = 15 pm.max_requests = 500 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on apc extension = apc.so apc.enabled=1 apc.shm_size=512MB apc.num_files_hint=0 apc.user_entries_hint=0 apc.ttl=7200 apc.use_request_time=1 apc.user_ttl=7200 apc.gc_ttl=3600 apc.cache_by_default=1 apc.filters apc.mmap_file_mask=/tmp/apc.XXXXXX apc.file_update_protection=2 apc.enable_cli=0 apc.max_file_size=1M apc.stat=1 apc.stat_ctime=0 apc.canonicalize=0 apc.write_lock=1 apc.report_autofilter=0 apc.rfc1867=0 apc.rfc1867_prefix =upload_ apc.rfc1867_name=APC_UPLOAD_PROGRESS apc.rfc1867_freq=0 apc.rfc1867_ttl=3600 apc.include_once_override=0 apc.lazy_classes=0 apc.lazy_functions=0 apc.coredump_unmap=0 apc.file_md5=0 apc.preload_path Varnish VCL backend default { .host = "127.0.0.1"; .port = "8080"; .connect_timeout = 6s; .first_byte_timeout = 6s; .between_bytes_timeout = 60s; } acl purgehosts { "localhost"; "127.0.0.1"; } # Called after a document has been successfully retrieved from the backend. sub vcl_fetch { # Uncomment to make the default cache "time to live" is 5 minutes, handy # but it may cache stale pages unless purged. (TODO) # By default Varnish will use the headers sent to it by Apache (the backend server) # to figure out the correct TTL. # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess set beresp.ttl = 24h; # Strip cookies for static files and set a long cache expiry time. 
if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset beresp.http.set-cookie; set beresp.ttl = 24h; } # If WordPress cookies found then page is not cacheable if (req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)") { # set beresp.cacheable = false;#versions less than 3 #beresp.ttl>0 is cacheable so 0 will not be cached set beresp.ttl = 0s; } else { #set beresp.cacheable = true; set beresp.ttl=24h;#cache for 24hrs } # Varnish determined the object was not cacheable #if ttl is not > 0 seconds then it is cachebale if (!beresp.ttl > 0s) { # set beresp.http.X-Cacheable = "NO:Not Cacheable"; } else if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { # You don't wish to cache content for logged in users set beresp.http.X-Cacheable = "NO:Got Session"; return(hit_for_pass); #previously just pass but changed in v3+ } else if ( beresp.http.Cache-Control ~ "private") { # You are respecting the Cache-Control=private header from the backend set beresp.http.X-Cacheable = "NO:Cache-Control=private"; return(hit_for_pass); } else if ( beresp.ttl < 1s ) { # You are extending the lifetime of the object artificially set beresp.ttl = 300s; set beresp.grace = 300s; set beresp.http.X-Cacheable = "YES:Forced"; } else { # Varnish determined the object was cacheable set beresp.http.X-Cacheable = "YES"; if (beresp.status == 404 || beresp.status >= 500) { set beresp.ttl = 0s; } # Deliver the content return(deliver); } sub vcl_hash { # Each cached page has to be identified by a key that unlocks it. # Add the browser cookie only if a WordPress cookie found. if ( req.http.Cookie ~"(wp-postpass|wordpress_logged_in|comment_author_)" ) { #set req.hash += req.http.Cookie; hash_data(req.http.Cookie); } } # vcl_recv is called whenever a request is received sub vcl_recv { # remove ?ver=xxxxx strings from urls so css and js files are cached. # Watch out when upgrading WordPress, need to restart Varnish or flush cache. set req.url = regsub(req.url, "\?ver=.*$", ""); # Remove "replytocom" from requests to make caching better. set req.url = regsub(req.url, "\?replytocom=.*$", ""); remove req.http.X-Forwarded-For; set req.http.X-Forwarded-For = client.ip; # Exclude this site because it breaks if cached if ( req.http.host == "sr.ituts.gr" ) { return( pass ); } # Serve objects up to 2 minutes past their expiry if the backend is slow to respond. set req.grace = 120s; # Strip cookies for static files: if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") { unset req.http.Cookie; return(lookup); } # Remove has_js and Google Analytics __* cookies. set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", ""); # Remove a ";" prefix, if present. set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", ""); # Remove empty cookies. if (req.http.Cookie ~ "^\s*$") { unset req.http.Cookie; } if (req.request == "PURGE") { if (!client.ip ~ purgehosts) { error 405 "Not allowed."; } #previous version ban() was purge() ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host); error 200 "Purged."; } # Pass anything other than GET and HEAD directly. if (req.request != "GET" && req.request != "HEAD") { return( pass ); } /* We only deal with GET and HEAD by default */ # remove cookies for comments cookie to make caching better. 
set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", ""); # never cache the admin pages, or the server-status page, or your feed? you may want to... I don't if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) { return(pipe); } # don't cache authenticated sessions if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") { return(lookup); } # don't cache ajax requests if(req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") { return (pass); } return( lookup ); } Varnish Daemon options DAEMON_OPTS="-a :80 \ -T 127.0.0.1:6082 \ -f /etc/varnish/ituts.vcl \ -u varnish -g varnish \ -S /etc/varnish/secret \ -p thread_pool_add_delay=2 \ -p thread_pools=8 \ -p thread_pool_min=100 \ -p thread_pool_max=1000 \ -p session_linger=50 \ -p session_max=150000 \ -p sess_workspace=262144 \ -s malloc,5G" I'm not sure where to start: should I optimize php-fpm first and then move on to Varnish, or is php-fpm already at its limit, so that I should start looking for the problem in Varnish?
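
    One hedged observation rather than a recommendation (the numbers below are placeholders): if php-fpm saturates the CPU at around 150 req/s, a usual first step is to cap pm.max_children at roughly the RAM you can spare for PHP divided by the average per-worker RSS, so the pool degrades by queueing instead of swapping or timing out; a static pool also removes fork churn under load.

        ; php-fpm pool sketch - sizes are illustrative only
        pm = static
        ; roughly (RAM reserved for PHP) / (average per-worker RSS)
        pm.max_children = 100
        ; recycle workers periodically to contain memory growth
        pm.max_requests = 500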

    Read the article

  • ./a.out terminated. Garbage output due to smashing of stack. How to remove this error?

    - by mekasperasky
    #include <iostream> #include <fstream> #include <cstring> using namespace std; typedef unsigned long int WORD; /* Should be 32-bit = 4 bytes */ #define w 32 /* word size in bits */ #define r 12 /* number of rounds */ #define b 16 /* number of bytes in key */ #define c 4 /* number words in key */ /* c = max(1,ceil(8*b/w)) */ #define t 26 /* size of table S = 2*(r+1) words */ WORD S [t],L[c]; /* expanded key table */ WORD P = 0xb7e15163, Q = 0x9e3779b9; /* magic constants */ /* Rotation operators. x must be unsigned, to get logical right shift*/ #define ROTL(x,y) (((x)<<(y&(w-1))) | ((x)>>(w-(y&(w-1))))) #define ROTR(x,y) (((x)>>(y&(w-1))) | ((x)<<(w-(y&(w-1))))) void RC5_ENCRYPT(WORD *pt, WORD *ct) /* 2 WORD input pt/output ct */ { WORD i, A=pt[0]+S[0], B=pt[1]+S[1]; for (i=1; i<=r; i++) { A = ROTL(A^B,B)+S[2*i]; B = ROTL(B^A,A)+S[2*i+1]; } ct [0] = A ; ct [1] = B ; } void RC5_DECRYPT(WORD *ct, WORD *pt) /* 2 WORD input ct/output pt */ { WORD i, B=ct[1], A=ct[ 0]; for (i=r; i>0; i--) { B = ROTR(B-S [2*i+1],A)^A; A = ROTR(A-S [2*i],B)^B; } pt [1] = B-S [1] ;pt [0] = A-S [0]; } void RC5_SETUP(unsigned char *K) /* secret input key K 0...b-1] */ { WORD i, j, k, u=w/8, A, B, L [c]; /* Initialize L, then S, then mix key into S */ for (i=b-1,L[c-1]=0; i!=-1; i--) L[i/u] = (L[i/u]<<8)+K[ i]; for (S [0]=P,i=1; i<t; i++) S [i] = S [i-1]+Q; for (A=B=i=j=k=0; k<3*t; k++,i=(i+1)%t,j=(j+1)%c) /* 3*t > 3*c */ { A = S[i] = ROTL(S [i]+(A+B),3); B = L[j] = ROTL(L[j]+(A+B),(A+B)); } } void printword(WORD A) { WORD k; for (k=0 ;k<w; k+=8) printf("%c"); } int main() { WORD i, j, k,ptext, pt1 [2], pt2 [2], ct [2] = {0,0}; ifstream in("key1.txt"); ifstream in1("plt.txt"); ofstream out1("cpt.txt"); if(!in) { cout << "Cannot open file.\n"; return 1; } if(!in1) { cout << "Cannot open file.\n"; return 1; } unsigned char key[b]; in >> key; in1 >> pt1[0]; in1 >> pt1[0]; if (sizeof(WORD)!=4) printf("RC5 error: WORD has %d bytes.\n",sizeof(WORD)); RC5_SETUP(key); RC5_ENCRYPT(pt1,ct); printf("\n plaintext "); printword(pt1 [0]); printword(pt1 [1]); printf(" ---> ciphertext "); printword(ct [0]); printword(ct [1]); printf("\n"); RC5_SETUP(key); RC5_DECRYPT(ct,pt2); out1<<ct[0]; out1<<ct[1]; out1 <<"\n"; printf("\n plaintext "); printword(pt1 [0]); printword(pt1 [1]); return 0; } Let the plt.txt file contain 101 100 let the key be 111
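
    A few hedged observations on the posted code, offered as likely candidates to check rather than a definitive diagnosis: printword() calls printf("%c") with no matching argument, which is undefined behaviour and a plausible source of the garbage output; the plaintext is read twice into pt1[0], leaving pt1[1] uninitialized; in >> key can overflow the 16-byte key buffer if key1.txt ever contains a longer token, which is exactly the kind of overflow the stack-smashing detector reports; and WORD is typedef'd to unsigned long, which is 8 bytes on most 64-bit builds (hence the sizeof check), breaking the 32-bit rotations. A corrected printword() might look like this:

        #include <cstdio>

        typedef unsigned long int WORD;   // as in the posted code; 4 bytes assumed
        #define w 32

        void printword(WORD A) {
            for (WORD k = 0; k < w; k += 8)
                std::printf("%c", static_cast<unsigned char>((A >> k) & 0xFF));
        }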

    Read the article

  • Losing NSManaged Objects in my Application

    - by Wayfarer
    I've been doing quite a bit of work on a fun little iPhone app. At one point, I get a bunch of player objects from my persistent store, and then display them on the screen. I also have the options of adding new player objects (they're just custom UIButtons) and removing selected players. However, I believe I'm running into some memory management issues, in that somehow the app is not saving which "players" are being displayed. Example: I have 4 players shown, I select them all and then delete them all. They all disappear. But if I exit and then reopen the application, they are all there again. As though they had never left. So somewhere in my code, they are not "really" getting removed. MagicApp201AppDelegate *appDelegate = [[UIApplication sharedApplication] delegate]; context = [appDelegate managedObjectContext]; NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *desc = [NSEntityDescription entityForName:@"Player" inManagedObjectContext:context]; [request setEntity:desc]; NSError *error; NSMutableArray *objects = [[[context executeFetchRequest:request error:&error] mutableCopy] autorelease]; if (objects == nil) { NSLog(@"Shit man, there was an error taking out the single player object when the view did load. ", error); } int j = 0; while (j < [objects count]) { if ([[[objects objectAtIndex:j] valueForKey:@"currentMultiPlayer"] boolValue] == NO) { [objects removeObjectAtIndex:j]; j--; } else { j++; } } [self setPlayers:objects]; //This is a must, it NEEDS to work Objects are all the players playing So in this snippet (in the viewDidLoad method), I grab the players out of the persistent store, and then remove the objects I don't want (those whose boolValue is NO), and the rest are kept. This works, I'm pretty sure. I think the issue is where I remove the players. Here is that code: NSLog(@"Remove players"); /** For each selected player: Unselect them (remove them from SelectedPlayers) Remove the button from the view Remove the button object from the array Remove the player from Players */ NSLog(@"Debugging Removal: %d", [selectedPlayers count]); for (int i=0; i < [selectedPlayers count]; i++) { NSManagedObject *rPlayer = [selectedPlayers objectAtIndex:i]; [rPlayer setValue:[NSNumber numberWithBool:NO] forKey:@"currentMultiPlayer"]; int index = [players indexOfObjectIdenticalTo:rPlayer]; //this is the index we need for (int j = (index + 1); j < [players count]; j++) { UIButton *tempButton = [playerButtons objectAtIndex:j]; tempButton.tag--; } NSError *error; if ([context hasChanges] && ![context save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } UIButton *aButton = [playerButtons objectAtIndex:index]; [players removeObjectAtIndex:index]; [aButton removeFromSuperview]; [playerButtons removeObjectAtIndex:index]; } [selectedPlayers removeAllObjects]; NSError *error; if ([context hasChanges] && ![context save:&error]) { NSLog(@"Unresolved error %@, %@", error, [error userInfo]); abort(); } NSLog(@"About to refresh YES"); [self refreshAllPlayers:YES]; The big part in the second code snippet is I set them to NO for currentMultiPlayer. NO NO NO NO NO, they should NOT come back when the view does load, NEVER ever ever. Not until I say so. No other relevant part of the code sets that to YES. Which makes me think... perhaps they aren't being saved. Perhaps that doesn't save, perhaps those objects aren't being managed anymore, and so they don't get saved in. Is there a lifetime (metaphorically speaking) of an NSManagedObject?
The Players array is the same one I set in the "viewDidLoad" method, and SelectedPlayers holds the players that are selected, as references to NSManagedObjects. Does it have something to do with removing them from the array? I'm so confused; some insight would be greatly appreciated!!
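
    Two hedged things worth checking, sketched below rather than asserted as the fix: the filter loop in viewDidLoad decrements j after a removal, but once an element is removed the next one slides into the same index, so j should be left alone there (with an int j, the j-- can even go to -1 and end the loop early, leaving unfiltered players in the array); and if the save in the removal path ever fails, the currentMultiPlayer = NO flags never reach the store, so it is worth logging that save error rather than only aborting.

        // Sketch of the filter loop without the extra decrement (Objective-C).
        NSUInteger j = 0;
        while (j < [objects count]) {
            NSManagedObject *player = [objects objectAtIndex:j];
            if (![[player valueForKey:@"currentMultiPlayer"] boolValue]) {
                [objects removeObjectAtIndex:j];   // next element is now at index j
            } else {
                j++;
            }
        }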

    Read the article

  • How do I add "Press any key to boot from usb" when installing Windows from a flash drive? (Grub4dos question / how to remove a bootloader)

    - by Vincent
    Hi there! I've been struggling with this problem for a while now and finally decided to ask for help. Let me first explain what the main purpose of the app is: to provide a very easy-to-use way of backing up files, after which I format the drive and start Windows 7 setup. I do this by booting WinPE, which runs a script to detect Windows installations and then opens a file browser. After the file browser is closed, the script continues and formats the drive that contains the Windows installation, and starts an unattended Windows 7 install. Now here is the problem: When you start Windows setup or WinPE from a DVD, you get a nice option to "Press any key to boot from DVD". This is to prevent the computer from booting the DVD when the first phase of the installation is complete and the computer reboots. However, when booting from a flash drive, Windows does not provide this option: it simply boots the flash drive every reboot. To replicate the "press any key" function, I installed Grub4Dos, which works great. It provides a small menu, the first standard item being "Continue installation", the second being "start installation". After quite a lot of tweaking, I got everything working: Start installation starts WinPE, which in turn starts the Windows installation. At first reboot, the Grub4Dos menu comes up, counts 5 seconds and boots the second stage of the installation. Here, I am greeted with the error: "Windows setup could not configure windows to run on this computer's hardware." When I boot into WinPE the normal way (put the bootmgr on the stick root) and change my BIOS to boot from the primary hdd after first reboot, I don't get this error. I've been looking around, and the only thing I could find was that the BIOS automatically names the boot device hd0, and that Windows can only be run / installed to hd 0. I'm not sure if this is the problem. I read about remapping to solve this problem, but to do that you have to know the physical location of the hard drive and partition, like hd(0,1). I want this flash drive to work on any PC, regardless of where the OS is installed, so that's not really a possibility. A possible fix I thought of is removing the bootloader from the flash drive when I'm in WinPE. That way, when the PC reboots the BIOS will not see the flash drive as a boot drive and instead boot the primary hdd. I have yet to find a way to do this. Thank you for reading my question, and if you have any suggestions, please share them.
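
    One hedged option for the "remove the bootloader" idea, sketched with placeholder disk/partition numbers: from WinPE, clear the flash drive's active flag with diskpart, which on many BIOSes stops the stick from being treated as a bootable device on the next restart; confirm the numbers with list disk / list partition before touching anything.

        rem Run diskpart in WinPE, then at the DISKPART> prompt:
        list disk
        select disk 1
        list partition
        select partition 1
        inactive
        exit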

    Read the article

  • How to add or remove a value inside a table cell on selection/de-selection of that row's checkbox, and submit the value via jQuery?

    - by Raul
    Here is the table: <%= form_tag '', :id => "costs" do %> <table class="table table-bordered" id="service_cost"> <% @services.each do |service| %> <tbody> <tr> <td><%= check_box_tag :open_service, {}, false, :class => 'checkable' %></td> <td><%= service.phone %></td> <td><%= service.internet %></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td><%= service.house_keeping %> </td> <td>0.0 </td> <td><%= service.laundry %></td> <td><%= text_field_tag "service_cost", service.total, :class => "input-small" %></td> </tr> <% end %> when the form gets submitted, the javascript gets into action: $("#costs").submit(function(){ formData=$("#costs").serializeArray(); processFormData(formData) return false; }); This ensures form submission on selecting the checkbox: $('.checkable').live('change', function() { $(this).parents('form:first').submit(); }); But, what I am looking for is adding or removing a cell value based on checkbox selection/de-selection and submitting it, kindly suggest a way to do it.
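
    A sketch of one way to do this (jQuery, reusing the markup above where possible; the selectors are assumptions, and this replaces the existing change handler rather than adding a second one): disable the row's service_cost input when its checkbox is unchecked, so serializeArray() silently drops that value from the submission, and re-enable it when the box is checked.

        $('.checkable').live('change', function () {
          var costInput = $(this).closest('tr').find('input[name="service_cost"]');
          if (this.checked) {
            costInput.removeAttr('disabled');        // value is serialized again
          } else {
            costInput.attr('disabled', 'disabled');  // serializeArray() skips disabled fields
          }
          $(this).parents('form:first').submit();
        });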

    Read the article

  • BSOD Code 16, artifacts all over the place (gtx 260)

    - by belinea
    I have the following, quite dated rig: E8400 Core 2 Duo CPU, Intel Dragontail Peak DP35DP motherboard on the Intel Bearlake P35 chipset, 4GB RAM, Geforce 260gtx, Corsair 650W PSU, Windows 7 64 bit. The following things have happened in the last few days. I first decided to update my Nvidia drivers to the latest version. That was 4 days ago. The PC worked fine for 2 days and I was able to play a few games as well without any problems. Then 2 days ago the first crash happened while playing the new XCOM game. BSOD code 16. Just the blue screen, no artifacts. The PC rebooted and worked well again; I continued playing this game for another 2 hours and went to sleep. The next evening I tried to play some BF3 multiplayer (I used to play on LOW settings). Approx. 10 minutes into the game, red/pink-ish artifacts appeared on the screen and the game quit to desktop. I restarted the game, and another 3-4 minutes afterwards there was another crash to desktop, but this time followed shortly by a BSOD Code 16. From that moment I started seeing artifacts on random startups, including the Windows loading screen and the BIOS itself. Windows would still load, but soon enough it would BSOD on a simple task like opening an Internet browser. Today I get tons of artifacts (little red dashes all over the screen) in the BIOS, the loading screen, normal Windows mode, as well as safe mode. I suspect it isn't the drivers, but I tried removing and sweeping them entirely in safe mode. The PC would still start with artifacts all over the place but would load normal mode, just without the driver, in the default lowest resolution. As soon as proper Nvidia drivers are installed, though, and the PC rebooted, Windows doesn't load at all, as the BSOD now appears on the loading screen. However, again, if I go to safe mode and remove the drivers, normal mode launches fine. So obviously the crash happens only at high resolution. I opened my machine this morning and gave it a proper cleaning even though it wasn't heavily dusted. It didn't help, and the number of artifacts seems to increase with every PC restart. I write these words in safe mode, which works, but I have to look through all the red dots and dashes. I don't have a built-in GPU chipset, so I can't try removing my Geforce card, nor can I borrow a GPU from anyone else. What are my options? I was looking into getting a completely new rig around Christmas so I'm not freaking out about this. If everything points to a hardware issue I may simply decide to get the new machine earlier and not bother with fixing this one. However, it would be great to learn more if this is indeed a situation that has slim chances of getting sorted. I realize BSOD Code 16 is a rather popular topic online, but every story seems a bit different and there can be a number of issues behind it. Hence a new thread.

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over. The problem with locks There's several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series): Locks block threads The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. It sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency. Locks are slow Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance. Deadlocks When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to aquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see. So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection. Implementation As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method: private void GetBucketAndLockNo( int hashcode, out int bucketNo, out int lockNo, int bucketCount) { // the bucket number is the hashcode (without the initial sign bit) // modulo the number of buckets bucketNo = (hashcode & 0x7fffffff) % bucketCount; // and the lock number is the bucket number modulo the number of locks lockNo = bucketNo % m_locks.Length; } However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. 
Instead, ConcurrentDictionary uses a strict linked list on each bucket: This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent to any changes to other buckets. Modifying the dictionary All the operations on the dictionary follow the same basic pattern: void AlterBucket(TKey key, ...) { int bucketNo, lockNo; 1: GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length); 2: lock (m_locks[lockNo]) { 3: Node headNode = m_buckets[bucketNo]; 4: Mutate the node linked list as appropriate } } For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern. Performance issues There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out mutiple locks at once inevitably summons the spectre of the deadlock; two threads each hold a lock, and each trying to acquire the other lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety. Consequences of growing the array Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method. 
The operation of this method can be boiled down to: private void GrowTable(Node[] buckets) { try { 1: Acquire first lock in the locks array // this causes any other thread trying to take out // all the locks to block because the first lock in the array // is always the one taken out first // check if another thread has already resized the buckets array // while we were waiting to acquire the first lock 2: if (buckets != m_buckets) return; 3: Calculate the new size of the backing array 4: Node[] array = new array[size]; 5: Acquire all the remaining locks 6: Re-hash the contents of the existing buckets into array 7: m_buckets = array; } finally { 8: Release all locks } } As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime), when the second thread is unblocked it'll see that the array has already been resized & exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation: Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8). Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here: void AlterBucket(TKey key, ...) { while (true) { Node[] buckets = m_buckets; int bucketNo, lockNo; GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, buckets.Length); lock (m_locks[lockNo]) { // if the state has changed, go back to the start if (buckets != m_buckets) continue; Node headNode = m_buckets[bucketNo]; Mutate the node linked list as appropriate } break; } } TryGetValue and GetEnumerator And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). 
This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key & value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary: Lockless: TryGetValue GetEnumerator The indexer getter ContainsKey Takes out every lock (lockfull?): Count IsEmpty Keys Values CopyTo ToArray Concurrent principles That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming: Partitioning When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions. Ordered lock-taking When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks. Lockless reads Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection. That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!

    Read the article

  • How can I do Ubuntu Lucid Desktop installation with HTTP preseed?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried: auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash -- If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What are the correct boot parameters for launching such an installation? From the Apache log of the server hosting preseed.cfg, I see that the installer has no problems fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct.

    Read the article

  • How can I do Ubuntu Lucid Desktop installation with HTTP preseed file and desktop installation CD?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried: auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash -- If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What am I missing? From the Apache log of the server hosting preseed.cfg, I see that the installer has no problems fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct. Any ideas?

    Read the article

  • ubuntu-10.04-desktop-i386 does not work with HTTP preseed?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried: auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash -- If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What are the correct boot parameters for launching such an installation? From the Apache log of the server hosting preseed.cfg, I see that the installer has no problems fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct.

    Read the article

  • Remote logging for multiple Apache virtual hosts using syslog-ng

    - by James
    I'm running a couple of Apache web servers that each have 4-8 separate virtual hosts on them. I'm trying to set up a dedicated log server that stores each virtual host's access and error logs in a separate directory for that virtual host. For example, on the logging server, /var/log/remove/10.0.0.2/virtualhost1 contains access_log and error_log /var/log/remove/10.0.0.2/virtualhost2 contains access_log and error_log /var/log/remove/10.0.0.3/virtualhost3 contains access_log and error_log and so on... Right now I have it split up by host, but I can't figure out how to additionally split it by virtual host. Here are the relevant lines from the logging server's syslog-ng.conf: source r_src { tcp(ip("0.0.0.0") port(5140)); }; destination r_all { file("/opt/splunk/logs/$HOST"); }; log { source(r_src); destination(r_all); }; Any help would be appreciated. Thanks!
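
    A hedged sketch of one common approach (the directives are real Apache/syslog-ng ones; the tags, facility and paths are placeholders, and it assumes the web servers already forward that facility to the log host as the existing tcp source suggests): have each virtual host log through the logger utility with its own tag, then key the destination template on $PROGRAM so each tag lands in its own directory.

        # On each web server, inside the relevant <VirtualHost> block:
        CustomLog "|/usr/bin/logger -t virtualhost1_access -p local1.info" combined
        ErrorLog  "|/usr/bin/logger -t virtualhost1_error -p local1.err"

        # On the logging server (syslog-ng.conf):
        destination r_all {
            file("/opt/splunk/logs/$HOST/$PROGRAM" create_dirs(yes));
        };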

    Read the article

  • Set up linux box for hosting a-z

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via vpn only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers, and document my progress/pitfalls. Hopefully someday this will help someone down the line. The details: CentOS 5.5 x86_64 httpd: Apache/2.2.3 mysql: 5.0.77 (to be upgraded) php: 5.1 (to be upgraded) The requirements: SECURITY!! Secure file transfer Secure client access (SSL Certs and CA) Secure data storage Virtualhosts/multiple subdomains Local email would be nice, but not critical The Steps: Download latest CentOS DVD-iso (torrent worked great for me). Install CentOS: While going through the install, I checked the Server Components option thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea. Basic config: Setup users, networking/ip address etc. Yum update/upgrade. Upgrade PHP/MySQL: To upgrade PHP and MySQL to the latest versions, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it! Add IUS repository to our package manager cd /tmp wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm yum list | grep -w \.ius\. # list all the packages in the IUS repository; use this to find PHP/MySQL version and libraries you want to install Remove old version of PHP and install newer version from IUS rpm -qa | grep php # to list all of the installed php packages we want to remove yum shell # open an interactive yum shell remove php-common php-mysql php-cli #remove installed PHP components install php53 php53-mysql php53-cli php53-common #add packages you want transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. [control+d] #exit yum shell php -v PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45) Upgrade MySQL from IUS repository /etc/init.d/mysqld stop rpm -qa | grep mysql # to see installed mysql packages yum shell remove mysql mysql-server #remove installed MySQL components install mysql51 mysql51-server mysql51-devel transaction solve #important!! checks for dependencies transaction run #important!! does the actual installation of packages. 
[control+d] #exit yum shell service mysqld start mysql -v Server version: 5.1.42-ius Distributed by The IUS Community Project Upgrade instructions courtesy of IUS wiki: http://wiki.iuscommunity.org/Doc/ClientUsageGuide Install rssh (restricted shell) to provide scp and sftp access, without allowing ssh login cd /tmp wget http://dag.wieers.com/rpm/packages/rssh/rssh-2.3.2-1.2.el5.rf.x86_64.rpm rpm -ivh rssh-2.3.2-1.2.el5.rf.x86_64.rpm useradd -m -d /home/dev -s /usr/bin/rssh dev passwd dev Edit /etc/rssh.conf to grant access to SFTP to rssh users. vi /etc/rssh.conf Uncomment or add: allowscp allowsftp This allows me to connect to the machine via SFTP protocol in Transmit (my FTP program of choice; I'm sure it's similar with other FTP apps). rssh instructions appropriated (with appreciation!) from http://www.cyberciti.biz/tips/linux-unix-restrict-shell-access-with-rssh.html Set up virtual interfaces ifconfig eth1:1 192.168.1.3 up #start up the virtual interface cd /etc/sysconfig/network-scripts/ cp ifcfg-eth1 ifcfg-eth1:1 #copy default script and match name to our virtual interface vi ifcfg-eth1:1 #modify eth1:1 script #ifcfg-eth1:1 | modify so it looks like this: DEVICE=eth1:1 IPADDR=192.168.1.3 NETMASK=255.255.255.0 NETWORK=192.168.1.0 ONBOOT=yes NAME=eth1:1 Add more Virtual interfaces as needed by repeating. Because of the ONBOOT=yes line in the ifcfg-eth1:1 file, this interface will be brought up when the system boots, or the network starts/restarts. service network restart Shutting down interface eth0: [ OK ] Shutting down interface eth1: [ OK ] Shutting down loopback interface: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] Bringing up interface eth1: [ OK ] ping 192.168.1.3 64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.105 ms And this is where I'm at. I will keep editing this as I make progress. Any tips on how to Configure virtual interfaces/ip based virtual hosts for SSL, setting up a CA, or anything else would be appreciated.
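
    As a sketch for the SSL/virtual-host question at the end (all names and paths are placeholders, and it assumes mod_ssl is installed and already provides the Listen 443): with the virtual interface from the walkthrough in place, an IP-based SSL vhost in Apache 2.2 can be bound to that address directly.

        # /etc/httpd/conf.d/app1-ssl.conf - illustrative only
        <VirtualHost 192.168.1.3:443>
            ServerName app1.example.internal
            DocumentRoot /var/www/app1
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/app1.crt
            SSLCertificateKeyFile /etc/pki/tls/private/app1.key
        </VirtualHost>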

    Read the article

  • Configure Iptables to allow a PHP-app accessing a port-nr

    - by Camran
    I have a PHP application which connects to another app called Solr (a database search engine). Via this PHP app I can add/remove documents (records) from the Solr index. However, Solr's security is low, and anybody with the right port number can access Solr and remove documents (records). I wonder, is it possible to ONLY allow my own PHP app to have access to Solr somehow? Preferably via iptables. I am thinking I could allow only my own server's IP to reach that port, and it would solve my problem, because PHP is server-side code. But I am not sure. About the PHP app: the website is a classifieds website, and when users want to add or remove classifieds, they do so through a PHP app, which is this one. The app has a function which connects to Solr and updates the database (index). I appreciate detailed answers... Thanks
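
    A hedged sketch (the port is a placeholder; Solr commonly listens on 8983, but use whatever your Jetty/Tomcat container is bound to, and swap 127.0.0.1 for the web server's address if PHP runs on a different machine): accept connections to the Solr port only from the PHP host, then drop everything else.

        # allow the PHP host, drop everyone else
        iptables -A INPUT -p tcp --dport 8983 -s 127.0.0.1 -j ACCEPT
        iptables -A INPUT -p tcp --dport 8983 -j DROP

    If Solr and the PHP app share one machine, an even simpler route is to bind the servlet container to 127.0.0.1 so the port is never exposed externally at all.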

    Read the article

  • Strip X Window System from OpenRD (Fedora core 8)

    - by disown
    I just got myself an OpenRD Client embedded box with an old Fedora 8 on it. It runs like a charm; the only problem is that the default install has X on it, which I would like to remove in order to free up some space. I found a similar post here on the same topic, but yum groupremove doesn't work in my case: yum grouplist returns nothing, so I presume that no groups are defined. I don't know if this is because Fedora 8 didn't have any groups, or because the OpenRD build is special in some way. In any case, I would like some tips on which package(s) to remove in order to strip out as big a chunk of X as possible. Thanks in advance.
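
    A hedged starting point when package groups are unavailable (the package names are the standard Fedora ones; the exact set on an OpenRD image may differ): remove the X server package itself and let yum's dependency resolution drag the rest of the stack along, reviewing the transaction list before confirming.

        # see which X-related packages are actually installed
        rpm -qa | grep -i -e xorg -e gnome
        # removing the server usually pulls most X clients with it via dependencies
        yum remove xorg-x11-server-Xorg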

    Read the article

  • How to restrict HTTP Methods

    - by hemalshah
    I want to restrict HTTP methods to those required by the MOSS07 application(s) using IIS6. I am not able to find a solution after googling. Can anyone help? Update: this is what was written in the document: "IIS6 should be used to restrict HTTP methods to those required by the MOSS07 application(s)." I also searched some books and saw something curious in O'Reilly's SharePoint 2007 by James Pyles and others: there is no real supported way to use HTTP POST and HTTP GET because of the web.config settings and the static definition of the WSDL. In the web.config: <protocols> <remove name="HttpGet"> <remove name="HttpPost"> <remove name="HttpPostLocalHost"> <add name="Documentation"> </protocols> If we do this in the Web.Config file, would it solve the problem?
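
    For the IIS6 side specifically (the web.config <protocols> block above only affects ASMX web-service bindings), one commonly used mechanism is UrlScan's verb whitelist. A hedged sketch of the relevant urlscan.ini sections follows; treat the verb list as an assumption to be checked against what MOSS actually needs, since some SharePoint client integrations also use verbs such as OPTIONS and PROPFIND.

        [Options]
        UseAllowVerbs=1

        [AllowVerbs]
        GET
        HEAD
        POST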

    Read the article

  • Cropping a PDF File's Margin During Printing

    - by JavaMan
    I'm using the free Acrobat Reader to print out some PDF documents that have very large top/bottom/left/right margins. I want to remove the margins (which are wasting too much space and making the fonts too small). I used to use Acrobat (the paid version with editing features) to crop the source PDF file manually, but since it is an old version it does not support newer PDF formats, and I don't want to upgrade for such a simple use. Is there any free way to crop/remove unwanted white margins from the printed PDF? I am thinking of printing the PDF files to a PDF printer like the Bullzip PDF Printer and enlarging the output manually so as to remove any white margin, but there does not seem to be such a feature in Bullzip PDF Printer. Is there any other virtual printer software that can be used for this purpose?
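
    One hedged free alternative, assuming a TeX distribution (TeX Live or MiKTeX) is acceptable to install: the pdfcrop utility trims the white margins of every page before the file is sent to the printer. The 5 below adds a small safety margin back, in PostScript points; the file names are placeholders.

        pdfcrop --margins 5 input.pdf cropped.pdf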

    Read the article

  • How to get HTTP preseed to work correctly on Ubuntu 10.04 LTS (Lucid)?

    - by netvope
    Installation media: ubuntu-10.04-desktop-i386.iso I tried a lot of different boot parameters, but either the installer ignored the preseed configuration, or it booted directly into the LiveCD. An example of the boot parameters I've tried: auto url=http://mydomain.com/path/preseed.cfg boot=casper only-ubiquity initrd=/casper/initrd.lz quiet splash -- If I remove only-ubiquity, it boots as a LiveCD. If I remove boot=casper, it won't boot. If I add vga=normal locale=en_US console-setup/layoutcode=us console-setup/ask_detect=false interface=auto, it still can't do an automatic install. If I remove auto, it's the same. What are the correct boot parameters for launching such an installation? From the Apache log of the server hosting preseed.cfg, I see that the installer has no problems fetching the preseed file. My preseed file is almost identical to the one at https://help.ubuntu.com/10.04/installation-guide/example-preseed.txt. Moreover, I have run debconf-set-selections -c preseed.cfg to ensure that the preseed file is correct.

    Read the article
