Search Results

Search found 72369 results on 2895 pages for 'max file descriptors'.


  • Networking Mac OS X 10.6 and Vista via ethernet?

    - by Moshe
    How can I make Vista Home Premium access the OS X hard drive, and the other way around? I'd like to transfer files over a direct ethernet connection. Plugging in an ethernet cable makes both computers recognize a network, but neither one sees the other device. Both firewalls are turned off, but no luck. Edit: I don't see Windows Sharing in the Services column.
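
    A minimal connectivity sketch, assuming the direct cable link gets no DHCP (the interface names and the 192.168.2.x range are assumptions, not from the question):

        # On the Mac, assuming the wired interface is en0:
        sudo ifconfig en0 inet 192.168.2.1 netmask 255.255.255.0

        # On Vista, from an elevated prompt, assuming the adapter is "Local Area Connection":
        netsh interface ip set address "Local Area Connection" static 192.168.2.2 255.255.255.0

        # Then test from the Mac:
        ping 192.168.2.2

    With both sides reachable by IP, sharing reduces to enabling SMB on the Mac (File Sharing with the SMB option ticked on 10.6) and connecting via smb://192.168.2.1 from Vista or \\192.168.2.2 from the Mac.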


  • How to mount LUKS partition securely on server

    - by Ency
    I'm curious whether it is possible to mount a partition encrypted by cryptsetup with LUKS securely and automatically on Ubuntu 10.04 LTS. For example, if I use a key file for the encrypted partition, then that key has to be present on a device that is not encrypted, and if someone steals my disk they'll be able to find the key and decrypt the partition. Is there any safe way to mount an encrypted partition automatically? If not, does anything exist to do what I want?
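
    For what it's worth, a minimal sketch of the interactive approach (passphrase at mount time, nothing stored on disk); the device name and mount point are placeholders:

        # Open the LUKS container; this prompts for the passphrase:
        sudo cryptsetup luksOpen /dev/sdb1 securedata

        # Mount the decrypted mapping:
        sudo mount /dev/mapper/securedata /mnt/secure

        # ...and the reverse when finished:
        sudo umount /mnt/secure
        sudo cryptsetup luksClose securedata

    Fully automatic mounting without a passphrase necessarily stores a key somewhere, so the usual compromise is a key file kept on a removable device (referenced from /etc/crypttab) that isn't stolen along with the disk.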


  • Managing hosts and iptables in scalable architecture

    - by hakunin
    Let's say I have a load balancer in front of 3 app servers. Let's say I also have these services available at certain IPs:

    - Postgres server
    - Redis server
    - ElasticSearch server
    - Memcached server 1
    - Memcached server 2
    - Memcached server 3

    So that's 6 nodes at 6 different IP addresses. Naturally, every one of my 3 app servers needs to talk to these 6 servers above. Then, to make it a bit funkier, I also have 3 worker servers. Each worker also talks to the above 6 servers, but thankfully workers and apps never need to talk to each other.

    Now's the kicker. Everything is on Digital Ocean VPSes, which means there is no private network and no private IPs. Each machine just has its own separate, public IP address, and you can't mask them or anything. So in order to build a secure environment I would have to configure some iptables rules. For example:

    - Allow the app servers to be accessed by the load balancer
    - Allow the Redis, ElasticSearch, Postgres, and each memcached server to be accessed by each app's IP and each worker's IP

    This means that every time I add an app or worker, I also have to reconfigure iptables on those 6 servers to welcome the new app or worker (a sketch of what these per-service rules look like follows below). Is there a way to simplify this type of setup?

    I was thinking: what if there were a gateway machine between the apps/workers and the above 6 machines? That way all the interaction would happen via the gateway server, and when I add a new app or worker I wouldn't need to teach the 6 servers to let it in. If I went this route, I'd hope a small 512 MB server could handle it with hardly any overhead. Or would there be some?

    Please help with the best way to handle this situation. I would appreciate an answer as concrete as possible. I don't think this is too specific: this general architecture is very common, and Digital Ocean is becoming increasingly popular, so a concrete solution here would be appreciated by many.
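
    By way of illustration, the per-service rules being described would look something like this on the Postgres box (the 203.0.113.x addresses are placeholders for app and worker IPs):

        # each app/worker IP gets its own ACCEPT; everyone else is dropped
        iptables -A INPUT -p tcp --dport 5432 -s 203.0.113.10 -j ACCEPT
        iptables -A INPUT -p tcp --dport 5432 -s 203.0.113.11 -j ACCEPT
        iptables -A INPUT -p tcp --dport 5432 -j DROP

    Adding a fourth app server means appending one more ACCEPT line on each of the six boxes, which is exactly the churn the question is trying to avoid.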


  • Can't see any entries on "File Types" tab on Windows 7's Desktop Search

    - by Mick
    For some reason, Windows Desktop Search (included with Windows 7) now no longer shows any file types on the "Advanced Options" dialog box. Any ideas on how to fix this? WDS claims to be indexing... but I'm not sure what it's actually indexing right now. I should also add that when I try to add a file type, nothing shows up or is ever actually added. If I log in as a different user on the same system, everything appears normally.


  • Adobe software does not save to network share

    - by Bart van Heukelom
    I'm running Windows 7 inside VirtualBox on a Linux host. I have shared my Linux filesystem so it's accessible in Windows under \\vboxsvr\sharename, and I've mounted this share on S:. For most software it works fine. Adobe software like Photoshop has problems with it, though: I can read from S: just fine, but if I try to save something it gives me the message "There are no more files". How can I make it able to write to the share?


  • What's the easiest way to duplicate a portion of a directory structure onto an external drive?

    - by Jon Cage
    I'm trying to move a large chunk of data from one of our servers onto an external drive for delivery to Amazon Glacier storage. To do that, I'd like to copy a chunk of the server, preserving the directory structure. I.e. move this:

        \\MyServer\Some\Longwinded\Path\TheDataIWantToCopy
        \\MyServer\Some\Longwinded\Path\TheDataIWantToCopy\First bit of data\DataFile1.dat

    to this:

        D:\
        D:\First bit of data\DataFile1.dat
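
    Assuming the copy is run from a Windows machine, robocopy preserves the tree below the source root; a minimal sketch:

        robocopy "\\MyServer\Some\Longwinded\Path\TheDataIWantToCopy" D:\ /E /COPY:DAT /R:2 /W:5

    /E copies subdirectories (including empty ones), /COPY:DAT keeps data, attributes and timestamps, and /R and /W stop a flaky share from stalling the job for hours.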


  • Rails 2 and Nginx: https pages can't load css or js (but will load graphics)

    - by Max Williams
    ADMISSION: I've posted this same question on Stack Overflow before realising it's probably better suited to Super User, but it kind of depends on the answer: if it turns out to be a problem in my nginx config, it's definitely Super User; if it turns out to be a problem in my Rails config (or code), then it's arguably Stack Overflow.

    I'm adding some https pages to my Rails site. In order to test it locally, I'm running the site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where I can actually go to the https pages, and they load. Except the javascript and css files all fail to load: looking in the Network tab in Chrome web tools, I can see that it is trying to load them via an https url. E.g. one of the non-working file urls is:

        https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216

    I have these set up (or at least think I do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If I click on this in the Network tab, it takes me to the above url, which redirects to the http version. So the redirect seems to be working in some sense, but not when the files are loaded by an https page. Like I say, I thought I had this covered by the second try_files directive in my config below, but maybe not. Can anyone see what I'm doing wrong? Thanks, Max.

    Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block:

        server {
            listen 443 ssl;
            keepalive_timeout 70;

            ssl_certificate     /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt;
            ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key;
            ssl_session_cache   shared:SSL:10m;
            ssl_session_timeout 10m;
            ssl_protocols       SSLv3 TLSv1;
            ssl_ciphers         RC4:HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;

            server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
            root /home/max/work/charanga/elearn_container/elearn;

            # ensure that we serve css, js and other statics when requested
            # as SSL, but if the files don't exist (i.e. any non /basket
            # controller) then redirect to the non-https version
            location / {
                try_files $uri @non-ssl-redirect;
            }

            # securely serve everything under /basket (/basket/checkout etc);
            # we need general too, because of the email/username checking
            location ~ ^/(basket|general|cmw/account/check_username_availability) {
                # make sure cached copies are revalidated once they're stale
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                # this serves Rails static files that exist without running
                # other rewrite tests
                try_files $uri @rails-ssl;
                expires 1h;
            }

            location @non-ssl-redirect {
                return 301 http://$host$request_uri;
            }

            location @rails-ssl {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 180;
                proxy_next_upstream off;
                proxy_pass http://127.0.0.1:3000;
                expires 0d;
            }
        }

        #upstream elrs {
        #    server 127.0.0.1:3000;
        #}

        server {
            listen 80;
            server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk;
            root /home/max/work/charanga/elearn_container/elearn;

            access_log /home/max/work/charanga/elearn_container/elearn/log/access.log;
            error_log  /home/max/work/charanga/elearn_container/elearn/log/error.log debug;

            client_max_body_size 50M;
            index index.html index.htm;

            # gzip html, css & javascript, but don't gzip javascript for pre-SP2
            # MSIE6 (i.e. those *without* SV1 in their user-agent string)
            gzip on;
            gzip_http_version 1.1;
            gzip_vary on;
            gzip_comp_level 6;
            gzip_proxied any;
            gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html

            # make sure gzip does not lose large gzipped js or css files
            # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl
            gzip_buffers 16 8k;

            # Disable gzip for certain browsers.
            #gzip_disable "MSIE [1-6].(?!.*SV1)";
            gzip_disable "MSIE [1-6]";

            # blank gif like it's 1995
            location = /images/blank.gif {
                empty_gif;
            }

            # don't serve files beginning with dots
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # we don't care if these are missing
            location = /robots.txt   { log_not_found off; }
            location = /favicon.ico  { log_not_found off; }
            location ~ affiliate.xml { log_not_found off; }
            location ~ copyright.xml { log_not_found off; }

            # convert urls with multiple slashes to a single /
            if ($request ~ /+ ) {
                rewrite ^(/)+(.*) /$2 break;
            }

            # X-Accel-Redirect
            # Don't tie up mongrels with serving the lesson zips or exes,
            # let Nginx do it instead
            location /zips {
                internal;
                root /var/www/apps/e_learning_resource/shared/assets;
            }

            location /tmp {
                internal;
                root /;
            }

            location /mnt {
                root /;
            }

            # resource library thumbnails should be served as usual
            location ~ ^/resource_library/.*/*thumbnail.jpg$ {
                if (!-f $request_filename) {
                    rewrite ^(.*)$ /images/no-thumb.png break;
                }
                expires 1m;
            }

            # don't make Rails generate the dynamic routes to the dcr and swf,
            # we'll do it here
            location ~ "lesson viewer.dcr" {
                rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break;
            }

            # we need this rule so we don't serve the older lessonviewer
            # when the rule below is matched
            location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break;
            }

            location ~ v6lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break;
            }

            location ~ lessonViewer.swf {
                rewrite ^(.*)$ /assets/players/lessonViewer.swf break;
            }

            location ~ lgn111.dat {
                empty_gif;
            }

            # try to get autocomplete school names from memcache first,
            # then fall back to rails when we can't
            location /schools/autocomplete {
                set $memcached_key $uri?q=$arg_q;
                memcached_pass 127.0.0.1:11211;
                default_type text/html;
                error_page 404 =200 @rails;  # 404 not really! Hand off to rails
            }

            location / {
                # make sure cached copies are revalidated once they're stale
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                # this serves Rails static files that exist without running
                # other rewrite tests
                try_files $uri @rails;
                expires 1h;
            }

            location @rails {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 180;
                proxy_next_upstream off;
                proxy_pass http://127.0.0.1:3000;
                expires 0d;
            }
        }
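
    An untested sketch of one alternative, not from the original question: browsers treat http-served scripts and stylesheets on an https page as blockable mixed content even when the request would redirect, while images are merely warned about, which would match graphics loading but css/js failing. Serving the assets directly over SSL instead of redirecting them would look something like this inside the 443 server block:

        # serve static assets over https rather than redirecting to http
        location ~ ^/(stylesheets|javascripts|images)/ {
            try_files $uri @rails-ssl;
            expires 1h;
        }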


  • What's the behaviour when opening a group of files at once?

    - by Leo King
    I selected a group of .odt files, right-clicked the first one and selected Open. I would have thought they would open in alphabetical order, but as the second screenshot shows, they don't: the one named "2014-11-13" opens before "2013-10-23". And when I select the whole group and press Enter, sometimes they open in order and sometimes they don't. What's the behaviour when you right-click and open a batch of files? Does it start with the one you selected and then pick random files in the list to open next?


  • umask is being ignored on Gentoo while creating new files

    - by drcelus
    I have a server running Gentoo and hosting a Drupal installation. Whenever a Drupal update is executed, the directory permissions of the updated module turn from 755 to 744, preventing the application from accessing the files. The umask is defined as 022 under /etc/profile, and the Apache server is running under user and group nobody. I believe this has nothing to do with the Drupal installation, since the same thing happens if I create a directory as root: it is created with 744 permissions. Since the umask is 022, shouldn't it be created as 755? Why is the umask being ignored, and how do I tell the server to create directories with permission 755?
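
    As a sanity check on the arithmetic: with umask 022, new directories should get 777 & ~022 = 755 and new files 666 & ~022 = 644, so a directory coming out as 744 suggests the creating process is actually running with umask 033 (or is chmod-ing after the fact). A quick way to verify what a given shell really has:

        umask                    # print the current mask
        mkdir /tmp/umask-test
        ls -ld /tmp/umask-test   # with umask 022, expect drwxr-xr-x (755)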


  • I lost the menu inside of Explorer (file browser) under Windows XP

    - by jfmessier
    The question says it all. Inside of the file explorer under Windows XP, I lost all the menu items from the top, as well as the toolbars. Right-clicking only gives me the file/directory context menu; nothing to restore the toolbar or the menu. Help. I don't use Windows much (I much prefer Linux), so I don't know all the latest tricks and hacks. Thanks :-)


  • I'd like to archive files from Ubuntu to Windows between two computers on a shared home network

    - by Wabbitseason
    I have an old laptop running Ubuntu 9.10 which I use as a LAMP environment for web development, and a comfortable, powerful desktop computer with Windows 7 installed on it. These two are connected to a home router, so both can access the internet. I have been able to set up Samba so I can mount my Apache home directory; it is accessible from Windows and mapped as a network drive. What I'd like to do is access some Windows folders from Linux, so I could automatically create backups (with cron scripts) of my work to physically different locations on the Windows box. Perhaps at a later time I'd set up a local Subversion repository, and I'd love to keep backups of that on the Windows drives too. Using Ubuntu's Places/Network menu I can see my desktop, but I'm unable to log in to it despite having created the correct username and password on Windows. All I can get is the following error message: "Unable to mount location. Failed to retrieve share list from server." What could be misconfigured?
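
    Once the share-list problem is solved, the backup side could be as simple as this sketch (the share name, credentials and paths are placeholders):

        # mount the Windows 7 share over CIFS
        sudo mount -t cifs //192.168.1.50/Backups /mnt/winbackup -o username=winuser

        # then a nightly crontab entry on the Ubuntu laptop, e.g.:
        # 0 2 * * * rsync -a --delete /home/wabbitseason/public_html/ /mnt/winbackup/work/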


  • Open dialog box won't show files in libraries, but explorer will

    - by Alex
    I have the weirdest problem when trying to open or save files. When I try to get to "My Documents" through the "Libraries" side link, it won't show any of my files. It will show them if I navigate from the C:\ drive into the user files, though. I thought it was because I didn't have the right location defined for the "Libraries" shortcut, but when I use Explorer to open my "Libraries" it shows all the files. Any ideas?


  • How to search for a folder from the Windows 8 Start screen

    - by Edward Brey
    In Windows 7, if you press the Windows key and type the name of a folder, the folder shows up among the Start menu search results. In Windows 8, if you do the same thing, no folders are listed. The Files filter shows files with matching names, but no folders. I realize that you can still search for folders from the Windows Explorer search box, but navigating that way is a bit slow and clumsy. Is there a quicker way, in particular a way to search directly from the Windows 8 Start screen?


  • Windows XP cannot access admin share

    - by barlop
    I have 3 systems, A, B and Compx, all on XP, but computers A and B have an issue with Compx. Compx has network shares I can access: I can do \\compx and get some of them. But I cannot access the admin share c$. \\compx\c$ gives a login prompt, and I can't get any user/pass to work.

    I looked at the permissions but don't see an issue. Nevertheless, I will describe what I see. In the security tab of C: I have Administrators, creator owner, everyone, bob, system, users (6 entries). "Creator owner" has nothing ticked, and I can't seem to change that: if I tick the boxes so they all get ticked and click Apply, it spends 2.5 minutes completing the operation and then they all untick again. This isn't the root of the problem, though, since I get the same thing on the share I can access.

    In Advanced, I see those same 6 entries, all with "full control", all applying to "this folder, subfolders and files", except creator owner, which is "subfolders and files only". Looking at the properties for the share I can see, it looks the same, except that under security > advanced, double-clicking any of them shows the boxes all ticked but greyed out. That's not the problem either, since I can access that share. So I don't know what the problem is.
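
    One low-risk thing to try from A or B is supplying credentials explicitly instead of fighting the prompt, since XP's Simple File Sharing authenticates remote users as Guest, and Guest is never allowed on c$ (the account name here is a placeholder):

        net use \\compx\c$ /user:compx\bob *

    If that still fails, turning off "Use simple file sharing" on Compx (Folder Options > View) makes XP Pro use real per-user authentication for its shares.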


  • Generating and collaborating on network map diagrams?

    - by Ian C.
    I have to turn out some network topology maps for very large networks. I'd like the format for the maps to be something other people can also edit and contribute to regardless of what software I'm using on my Mac to build them. I don't mind spending money on my end for software, but I can't require that my clients spend any money. I also can't promise my clients are also using OS X -- they could be running Linux or Windows. Is there a best software application on OS X for producing maps that I can share with other, non-OS X, users? Is there a best format for sharing topology maps that I should use when exporting the maps to disk?


  • Find files whose name is smaller or greater than a given parameter

    - by Tzury Bar Yochay
    Say that in a given directory I have:

        tzury@x200:~/Desktop/sandbox$ ls -l
        total 20
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P000
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P001
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P002
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P003
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N00.P004
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P000
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P001
        drwxr-xr-x 2 tzury tzury 4096 2011-03-09 10:19 N01.P002

    I'm looking for a bash way to grab the list of files whose name is either greater or smaller than a given parameter. For instance:

        $ my_finder lt N00.P003

    shall return N00.P000, N00.P001 and N00.P002, and

        $ my_finder gt N00.P003

    shall return N00.P004, N01.P000, N01.P001 and N01.P002.

    I was thinking of iterating with for name in $(ls) and comparing $name against $2, but I believe there is a more elegant way of doing so.
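
    Since bash's [[ ]] compares strings lexicographically with < and >, my_finder doesn't need to parse ls at all; a minimal sketch:

        #!/usr/bin/env bash
        # my_finder: print names in the current directory that sort
        # before (lt) or after (gt) the given pivot name
        op=$1 pivot=$2
        for name in *; do
            case $op in
                lt) [[ $name < $pivot ]] && printf '%s\n' "$name" ;;
                gt) [[ $name > $pivot ]] && printf '%s\n' "$name" ;;
            esac
        done

    Because the names zero-pad their numbers (N00.P003), lexicographic order and numeric order agree here.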


  • Terminal command to change permissions to my 'Sites' folder and apply change to enclosed items?

    - by Ryan
    Using Snow Leopard, I'm having issues with permissions in my Sites folder. While I can navigate to localhost/~username and read any files or folders there, the same permissions have not been applied to enclosed items, and I get a 403 error trying to access them in the browser. If I select one of these enclosed folders and get info using Finder, I see the user 'Everyone' is set to 'No Access' but I can't change that (this behavior seems buggy, actually). And if I select my 'Sites' folder, the tool to 'Apply to enclosed items' is grayed out... Is there a Terminal command I can use to grant 'Read Only' access to my Sites folder, and all it contains, for the user 'Everyone'?
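
    A sketch of the usual incantation (capital X grants execute only on directories, and on files that already have execute for someone):

        # everyone gets read; directories also get search permission
        chmod -R o+rX ~/Sites

        # Apache must also be able to traverse the parent directory
        chmod o+x ~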


  • Simple, distributed, disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up my home directory on my Ubuntu laptop, machine X. Suppose I have access to 2 different remote (Linux) servers that I can back up to, machines A and B. Machine X will be the master and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B; that's all I need. However, I'm curious if there's a more bandwidth-efficient, and hence faster, way to do it. X is going to be on residential-style broadband lines, and since I don't want to soak up the bandwidth, I would limit the transfer rate from X. A and B will be on all the time, but X will not be, so I'd also like to reduce the amount of time X spends transferring, potentially allowing A and B to spend more time transferring between themselves. Also, X won't be connected all the time.

    What's the best way to do this? rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync, the --del option would be used. Could that mean something gets transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like GlusterFS, but I think that's overkill in this case, and it might not fit with the disconnected nature of the setup.
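
    Under those constraints, the straightforward version of what's described would be a sketch like this (host names, paths and the 100 KB/s cap are placeholders):

        # X -> A, rate-limited so it doesn't soak the home uplink;
        # --delete prunes files that no longer exist on X
        rsync -az --delete --bwlimit=100 ~/ rory@hostA:backup/home/

        # then A -> B on A's own schedule, where bandwidth is cheap
        rsync -az --delete ~/backup/home/ rory@hostB:backup/home/

    The A-to-B timing worry from the question applies here too: if A syncs to B mid-update, B can briefly hold a mixture of old and new files until the next pass.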

