Search Results

Search found 4625 results on 185 pages for 'split tunnel'.


  • Timed selector never performed

    - by sudo rm -rf
    I've added an observer for my method:

        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(closeViewAfterUpdating) name:@"labelUpdatedShouldReturn" object:nil];

    Then my relevant methods:

        -(void)closeViewAfterUpdating {
            NSLog(@"Part 1 called");
            [self performSelector:@selector(closeViewAfterUpdating2) withObject:nil afterDelay:2.0];
        }

        -(void)closeViewAfterUpdating2 {
            NSLog(@"Part 2 called");
            [self dismissModalViewControllerAnimated:YES];
        }

    The only reason why I've split this method into two parts is so that I can have a delay before the method is fired. The problem is, the second method is never called. My NSLog output shows Part 1 called, but it never fires part 2. Any ideas?

    EDIT: I'm calling the notification from a background thread, does that make a difference by any chance? Here's how I'm creating my background thread:

        [NSThread detachNewThreadSelector:@selector(getWeather) toTarget:self withObject:nil];

    and in getWeather I have:

        [[NSNotificationCenter defaultCenter] postNotificationName:@"updateZipLabel" object:textfield.text];

    Also, calling:

        [self performSelector:@selector(closeViewAfterUpdating2) withObject:nil];

    does work.

    EDITx2: I fixed it. Just needed to post the notification in my main thread and it worked just fine.
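
    A likely explanation and a minimal sketch of the fix the asker landed on: performSelector:afterDelay: schedules a timer on the current thread's run loop, and a detached background thread's run loop normally isn't running, so the delayed selector never fires. Handing the notification to the main thread avoids that; the getWeather body below is a reconstruction for illustration, not the asker's actual code:

        // Sketch: hand the notification off to the main thread, whose run loop is running,
        // so that performSelector:afterDelay: in the observer's method actually fires.
        - (void)getWeather {
            // ... background work (fetching the weather) goes here ...
            dispatch_async(dispatch_get_main_queue(), ^{
                [[NSNotificationCenter defaultCenter] postNotificationName:@"labelUpdatedShouldReturn"
                                                                    object:nil];
            });
        }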

    Read the article

  • Best way to make sure I get all available options array/loop

    - by jaz872
    OK, here is my problem. I'm looking for the best way to loop through a bunch of options to make sure that I hit all available options. Some detail. I've created a feature that allows a client to build images that are basically other images layered on top of each other. These other images are split up into different groups. They have links on the side of the image that they can click to scroll through all the different images to view them. I'm now making an automated process that is going to run the function that changes the image when a user clicks one of the links. I need to make sure that every possible combo of the different images is hit during this process. So I have an array with the number of options for each group. The current array is [3, 9, 3, 3] My question is what is the best way to loop through this to make sure that all possible options will be shown? I apologize if this seems simple for someone out there, but I'm just having trouble wrapping my head around it. Hopefully if that person is out there, they can give a helping hand :)
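
    A minimal sketch of one way to visit every combination, treating the per-group indices as a mixed-radix counter; JavaScript is assumed here because the feature is browser-side, and the console.log call stands in for whatever function actually applies the selected images:

        // Visit all 3 * 9 * 3 * 3 = 243 combinations of the groups in the question.
        var counts = [3, 9, 3, 3];          // number of options in each group
        var indices = [0, 0, 0, 0];         // current option index per group
        var done = false;
        while (!done) {
            console.log(indices.join(','));  // replace with the call that swaps the images
            // increment the right-most index and carry leftwards, like an odometer
            var i = counts.length - 1;
            while (i >= 0) {
                indices[i]++;
                if (indices[i] < counts[i]) break;
                indices[i] = 0;
                i--;
            }
            if (i < 0) done = true;          // carried past the left-most group: finished
        }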

    Read the article

  • Google sheet dynamic WHERE clause for query() statement

    - by jason_cant_code
    I have a data table like so:

        a 1
        a 2
        b 3
        b 4
        c 5
        c 6
        c 7

    I want to pull items out of this table by dynamically telling it which letters to pull. My current formula is:

        =query(A1:B7,"select * where A ='" & D1 & "'")

    D1 is the cell I modify to change the query. I want to be able to input a, a,b or a,b,c into D1 and have the query work. I know it would involve or statements in the query, but I haven't figured out how to make the formula dynamic. I am looking for a general solution for this pattern:

        a     ->  A = 'a'
        a,b   ->  A = 'a' or A = 'b'
        a,b,c ->  A = 'a' or A = 'b' or A = 'c'

    Or any other solution that solves the problem. Edit: So far I have

        =ArrayFormula(CONCATENATE("A='"&split(D3,",")&"' or "))

    which gives A='a' or A='b' or A='c' or for a,b,c. I can't figure out how to remove the last or.
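
    One common way (a sketch, not tested against this sheet) to build the OR list without the trailing " or " is to let JOIN place the separator between the pieces instead of after each one:

        =query(A1:B7, "select * where A = '" & JOIN("' or A = '", SPLIT(D1, ",")) & "'")

    For a,b,c in D1 this builds select * where A = 'a' or A = 'b' or A = 'c', with no dangling or to clean up.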

    Read the article

  • Save Cookies, Then Open Link in New Tab

    - by speedplane
    I have some javascript code that saves a cookie. However, if after saving the cookie the user opens a new tab, it appears that the cookie is not saved. The new tab is on the same domain. Here is my cookie setting/getting code:

        function setCookie(c_name, value, exdays) {
            var exdate = new Date();
            exdate.setDate(exdate.getDate() + exdays);
            var c_value = escape(value) + ((exdays == null) ? "" : "; expires=" + exdate.toUTCString());
            document.cookie = c_name + "=" + c_value;
        }

        function getCookie(c_name) {
            var i, x, y, ARRcookies = document.cookie.split(";");
            for (i = 0; i < ARRcookies.length; i++) {
                x = ARRcookies[i].substr(0, ARRcookies[i].indexOf("="));
                y = ARRcookies[i].substr(ARRcookies[i].indexOf("=") + 1);
                x = x.replace(/^\s+|\s+$/g, "");
                if (x == c_name) {
                    return unescape(y);
                }
            }
        }

    If some javascript calls setCookie('mycookie', 1) and then the user clicks on a link where the _target is set to _blank, the cookie does not load in the new tab. So getCookie('mycookie') will not return 1. What is the problem here?
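
    A frequent cause of this symptom (a guess, since the question doesn't confirm it) is that a cookie set without an explicit path is scoped to the path of the page that set it, so pages opened elsewhere on the same domain may not see it. A sketch of the same helper with the path pinned to the site root:

        // Variant of the asker's helper that makes the cookie visible site-wide.
        function setCookie(c_name, value, exdays) {
            var exdate = new Date();
            exdate.setDate(exdate.getDate() + exdays);
            var c_value = escape(value) +
                ((exdays == null) ? "" : "; expires=" + exdate.toUTCString()) +
                "; path=/";   // without this, the browser scopes the cookie to the setting page's path
            document.cookie = c_name + "=" + c_value;
        }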

    Read the article

  • Allowing user to type only one "."

    - by Tartar
    I am trying to implement a simple javascript-html calculator. What I want to do is allow the user to type only one '.'. How can I control this? Here is the code that I tried. I can already count the number of '.' characters, but I'm confused: this replaceAll function is also not replacing '.' with an empty string.

        String.prototype.replaceAll = function(search, replace) {
            //if replace is null, return original string otherwise it will
            //replace search string with 'undefined'.
            if (!replace) return this;
            return this.replace(new RegExp('[' + search + ']', 'g'), replace);
        };

        function calculate() {
            var value = document.calculator.text.value;
            var valueArray = value.split("");
            var arrayLenght = valueArray.length;
            var character = ".";
            var charCount = 0;
            for (i = 0; i < arrayLenght; i++) {
                if (valueArray[i] === character) {
                    charCount += 1;
                }
            }
            if (charCount > 1) {
                var newValue = value.replaceAll(".", "");
                alert(newValue);
            }
        }
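
    Two things worth noting, sketched below rather than taken from the question: the if (!replace) guard also returns early when replace is an empty string (because !"" is true), which is why the dots are never stripped; and the dot count can be had without a loop:

        // Sketch: detect or strip a second decimal point before it reaches the display.
        function hasExtraDot(value) {
            // "1.2.3".split(".") has 3 parts, so more than 2 parts means more than one dot
            return value.split(".").length > 2;
        }

        function keepSingleDot(value) {
            // keep the first dot, drop any later ones
            var firstDot = value.indexOf(".");
            if (firstDot === -1) return value;
            return value.slice(0, firstDot + 1) + value.slice(firstDot + 1).replace(/\./g, "");
        }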

    Read the article

  • Remove .img css from prepended div

    - by Ivan Schrecklich
    OK, as the title says, I've got a div which is prepended and dynamically loaded. The problem I have is that I can't split the css for this one, as it also applies to whole strings. The usage is like this: I've got a @username somewhere in the string. If the user hovers over it, a div with information gets prepended to the current username. The problem is that I've also allowed users to post images in this text. As the autolinker is flexible, it doesn't know the image sizes and restrictions, and I want to leave it like that! So I define css classes which look like this:

        .minpost img {
            max-height: 30px;
            max-width: 30px;
        }

    Of course I don't need to mention that this rule is also applied to images inside the prepended div. And that I don't want! Nifty little tricks like !important won't work for me. So I am asking you guys. If you need further information, just ask.
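
    If the prepended info div has, or can be given, a class of its own, a more specific descendant rule can lift the thumbnail cap inside it; .userinfo below is a hypothetical class name, not taken from the question:

        /* Sketch: undo the 30px thumbnail cap only for images inside the prepended hovercard.
           ".userinfo" is a hypothetical class on the prepended div. */
        .minpost .userinfo img {
            max-height: none;   /* "none" is the initial value for max-height and max-width */
            max-width: none;
        }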

    Read the article

  • Replace dual-XP installs with single-XP install and repartition drive?

    - by caeious
    Hello,

    The Current Situation

    I have a hard drive that currently is split up like so:

        Primary Partition C:  9.77 GB NTFS Healthy (System) with XP Pro (in Polish) installed
        Extended Partition D: 39.82 GB NTFS Healthy (Boot) with XP Pro (in English) installed
                              6.30 GB Free space

    When I start my computer I get a black and white Windows Boot Manager dual boot screen with 2 choices, both being Microsoft Windows XP. The first choice is the English version of XP and the second choice is the Polish version of XP. Images of my Computer Management window and Dual Boot screen

    The Mission

    What I need to do is get rid of the entire extended partition (D: 39.82 GB & 6.30 GB free space) and just have the one primary C: drive, which I assume will be somewhere around 55 GB. So in the end I just want XP Pro in English running on my C: drive and no black and white boot screen to show up when starting up my laptop.

    The Question

    How do I go about successfully completing The Mission without making my computer a useless pile of silicon, plastic and metal?

    UPDATE: So I went ahead and tried to follow Neal's suggestion but hit a wall. I got to a Windows XP Pro install screen that had the 3 following options as well as my drive data:

        To set up Windows XP on the selected item, press Enter
        To create a partition in the unpartitioned space, press C
        To delete the selected partition, press D

        57232 MB Disk 0 at Id 0 on bus 0 on atapi [MBR]
        C: Partition1 [NTFS]    10001 MB (  4642 MB free )
           Unpartitioned space   6448 MB
        D: Partition2 [NTFS]    40774 MB ( 26225 MB free )
           Unpartitioned space      8 MB

    I figured I would go with the first choice ("To set up Windows XP on the selected item, press Enter") because I just wanted to set up Windows XP on C: Partition1 (which was preselected), so I pressed Enter, which brought me to a screen displaying this message:

        You chose to install Windows XP on a partition that contains another operating system. Installing Windows XP on this partition might cause the other operating system to function improperly.

        CAUTION: Installing multiple operating systems on a single partition is not recommended.

    So this leads me to 2 new questions:

    1. How do I get rid of the Windows XP (Polish language) install on C: Partition1 so that I can cleanly and safely install Windows XP (English language) on it? Neal, is this what you meant by me possibly having to delete the partition that the Windows XP (Polish language) install was located on?

    2. Since I have the option to delete partitions with the 3rd choice ("To delete the selected partition, press D"), should I do that on this screen, or wait until I have Windows XP (English language) safely installed on C: Partition1? I have to ask these questions because I have read that it is possibly dangerous to delete hard drive partitions. Just being cautious.

    Read the article

  • PXE-E32 TFTP Open Timeout While Attempting to PXE Boot from Windows Deployment Services

    - by bschafer
    I'm running Windows Deployment Services on Windows Server 2008 R2 on top of an ESX 4.0 box. This is the only function of this VM instance, although it had previously functioned as an AD Domain Controller. My DHCP server is running on our primary Domain Controller, which is also Server 2008 R2, but running on metal. Everything was working perfectly until we recently had our backup generator fail during a power outage, causing all of our servers and networking equipment to lose power for a period of time. When we brought all of our equipment back up, everything was working as expected except for WDS. Our network is split up into several different vlans. Now, depending on which vlan the client computer is on, it's behaving differently when attempting to PXE boot into WDS. Our servers are located on the 10.55.x.x vlan, which, due to the nature of it, has no DHCP server active in it. The first computer we plugged in happened to be in the 10.99.x.x vlan, which is supposed to be reserved for network management devices (i.e. switches), but we've been using it occasionally otherwise. That computer gave us PXE-E11 ARP Timeout errors. When we moved to a different computer on the 10.19.x.x vlan (for general purpose use), it finally gets an IP from DHCP, but it presents us with a very stumping PXE-E32 TFTP Open Timeout error. Before the power outage, it didn't matter which vlan a device was on; it would PXE boot and image just fine. I've made no changes to anything server-side. Everything is configured exactly the same way it was on my WDS and DHCP servers as before the power outage. I've tried several different computers, including different models. All of this, combined with the quirky behavior depending on the vlan, makes me think something went wrong in one or more of our switches, probably because of the power outage. Unfortunately, I'm no network guy, and I know very little about how to configure our switches properly. Is this an issue with switches, etc? If so, how can I fix it? Is there some magical option I'm not aware of? Does anybody out there have any hunches? I've pretty much exhausted my ideas. Our main switch is an HP Procurve 5406. We also have 3x HP Procurve 4208 switches. The ESX Server is an HP ProLiant DL380 G6. The WDS VM is currently using the VMXNET3 network adaptor, but we've also tried the E1000 adaptor.
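
    Not a confirmed diagnosis, but since the failures differ by VLAN, one routinely checked item after a switch power cycle is whether each client VLAN still relays DHCP/PXE traffic to the right servers and can route TFTP (UDP 69) back to the WDS box. An illustrative ProCurve-style sketch, with a made-up VLAN ID and placeholder addresses:

        ; Illustrative only: ProCurve-style DHCP relay settings for one client VLAN.
        ; The VLAN ID and server addresses are placeholders, not taken from the question.
        vlan 19
           ip helper-address 10.x.x.1     ; DHCP server (placeholder)
           ip helper-address 10.55.x.x    ; WDS/PXE server (placeholder)
           exit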

    Read the article

  • Splitting an internet connection between multiple separate subnetworks

    - by pythonian4000
    Problem I have an internet connection that I want to split between four separate networks. My requirements are: I need to be able to monitor the amount of bandwidth and data being used by each network, and notify or control as necessary. The four networks should only be able to connect to the internet, not each other. My parents need to be able to operate it, so it needs a simple, preferably Windows-based GUI. Progress so far Server I have a mini-ITX server with six Gigabit ethernet ports - one for the ethernet internet connection, one for each of the four networks, and one for remote access to the server for administration. Bandwidth control I spent a long time researching solutions here. The majority of the control systems/software I found could control bandwidth usage via QOS, but could not monitor or control the amount of data being used. Eventually I found the SoftPerfect Bandwidth Manager, which has everything I need in terms of monitoring and control - per-interface quota management, usage statistics, a web interface for checking usage, and email notifications when quotas are exceeded. It is also Windows-based and has a simple GUI. Internet sharing This is where I am having issues. I am currently using Windows XP Pro SP2 for the server (yes, I know this is far from ideal, but it's the only spare Windows OS I currently have). I can't use the built-in Internet Connection Sharing for several reasons: The upstream internet router has an IP of 192.168.0.1 which ICS clashes with, and I cannot change the router settings. ICS can only share an internet connection with a single interface, but I have four. I have tried bridging the four network cards, but then the Bandwidth Manager cannot see the four individual interfaces - it only sees the bridge. I have tried setting up Dual DHCP DNS server (and am having issues getting DHCP offers to be received by clients), but that would still require gateway software of some sort, which I have been unable to find. My current attempt is to use OpenVPN, with a server for the internet NIC and a separate client for each of the four networks. My thought is that I could bridge the OpenVPN TAP devices to each NIC, meaning that the Bandwidth Manager would control traffic from the bridge instead of the interface. I have not made much progress here though - I've never used OpenVPN before. Questions Is there a Windows software package that does everything I need? (Unlikely, I know) Is there a Windows software package that will share internet between multiple NICs without bridging? Are either of my about attempts feasible? Would it help to have a newer/server version of Windows? Is there a non-Windows alternative that is easy to use?

    Read the article

  • What are some fast methods for navigating to frequently used folders in Windows 7?

    - by fostandy
    (This is a followup question from my previous question.) In windows XP I used to be able to quickly navigate to frequently used folders by making use of the 'Favorites' menu item and the hotkey behaviour. In certain conditions it could be set up so that getting to a particular folder was as easy as alt-a x (and without a file explorer window open it was as fast as win-e alt-a x). I am struggling to get anywhere near this speed in Windows 7 and would like to solicit advice from others regarding fast folder navigation to see if I am missing any methods. My current way to navigate quickly is basically move hand to mouse move cursor to navigation pane/pain. scroll all the way to the top (because normally I the panel is focused on whatever deep directory structure I am already in). sift through my 50+ favorites to get the one I want, or click a link to a folder that contains further links in some sort of 'pseudo-tree' functionality. select it. This is slower than my previous method by upwards of an order of magnitude. There are a couple of things I've contemplated: add expandable folders, not just direct links, to the favorites menu. add expandable folders, not just direct links, to the start menu. add links of my favorite folders to a submenu of the start menu so that they come up when I search them. They do but this still rather cumbersome started using 7stacks - url here (I cannot link the url directly due to lack of reputation but http://www.alastria.com/index.php?p=software-7s). This is about the closest I've gotten to some sort of compact, customizeable, easy to access, tree based navigation structure. How do you power users quickly navigate to your favorite folders? Are there keyboard shortcuts I am missing? Can someone recommend other apps or addon or extensions that can achieve this sort of functionality? The Current solution (thanks to the answers below) I am going to use is a combination of Autohotkey and 7stacks - autohotkey to launch 7stacks, 7stacks with the 'menu' stack type for fast, key-enabled navigation to folders organised in a tree structure. This solves about 90% of the issue, the only issues are (note that these are really minor, I am really splitting hairs more than anything here) Can't use this for existing folder navigation (ie already have a explorer window open, want to go to another directory) A bit more cumbersome to add/remove entries to compared to xp favorites. A little slower than xp favorites. Whatever. I'm happy. Thanks guys. I think the answer is a split to John T and Kelbizzle - I've elected to give the answer to John T and +1 to Kelbizzle as I had already mentioned 7stacks.
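
    A minimal sketch of the AutoHotkey half of the setup the asker settled on (AutoHotkey launching the folders or stacks directly); the hotkeys and paths below are made up for illustration:

        ; Sketch: AutoHotkey v1 hotkeys that jump straight to favourite folders.
        ; Win+1 and Win+2 are arbitrary choices; the paths are placeholders.
        #1::Run, explorer.exe "C:\Projects"
        #2::Run, explorer.exe "D:\Music"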

    Read the article

  • Apache Mod_rewrite rule working on one server, but not another

    - by Mason
    I am using mod_jk and mod_rewrite on httpd 2.2.15. I have a rule.... RewriteCond %{REQUEST_URI} !^/video/play\.xhtml.* RewriteRule ^/video/(.*) /video/play.xhtml?vid=$1 [PT] I just want to rewrite something like /video/videoidhere to /video/play.xhtml?vid=videoidhere This works perfectly on my developer machine, but on production I get a 404 (generated by Jboss, not Apache). here is the tail of access.log and rewrite.log on prod (broken). the rewrite.log is exactly the same on dev(working) applying pattern '^/video/(.*)' to uri '/video/46279d4daf5440b2844ec831413dcc3b' RewriteCond: input='/video/46279d4daf5440b2844ec831413dcc3b' pattern='!^/video/play\.xhtml.*' => matched rewrite '/video/46279d4daf5440b2844ec831413dcc3b' -> '/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b' split uri=/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b -> uri=/video/play.xhtml, args=vid=46279d4daf5440b2844ec831413dcc3b forcing '/video/play.xhtml' to get passed through to next API URI-to-filename handler "GET /video/46279d4daf5440b2844ec831413dcc3b HTTP/1.1" 404 420 "-" "Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.6) Gecko/20100628 Ubuntu/10.04 (lucid) Firefox/3.6.6" I can access http://www.fivi.com/video/play.xhtml?vid=46279d4daf5440b2844ec831413dcc3b but not /video/46279d4daf5440b2844ec831413dcc3b Both server are even using the EXACT same httpd.conf, and modules. I built Apache with... ./configure --prefix /usr/local/apache2.2.15 --enable-alias --enable-rewrite --enable-cache --enable-disk_cache --enable-mem_cache --enable-ssl --enable-deflate Thanks, Mason ----UPDATE---- -mod-jk.conf JkWorkersFile /usr/local/apache2.2.15/conf/workers.properties JkLogFile /var/log/mod_jk.log JkLogLevel info JkLogStampFormat "[%a %b %d %H:%M:%S %Y]" JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories JkRequestLogFormat "%w %V %T" JkShmFile run/jk.shm <Location /jkstatus> JkMount status Order deny,allow Deny from all Allow from 127.0.0.1 </Location> -workers.properties worker.node1.port=8009 worker.node1.host=75.102.10.74 worker.node1.type=ajp13 worker.node1.lbfactor=20 worker.node1.ping_mode=A #As of mod_jk 1.2.27 worker.node2.port=8009 worker.node2.host=75.102.10.75 worker.node2.type=ajp13 worker.node2.lbfactor=10 worker.node2.ping_mode=A #As of mod_jk 1.2.27 worker.loadbalancer.type=lb worker.loadbalancer.balance_workers=node2,node1 worker.loadbalancer.sticky_session=True worker.status.type=status -httpd.conf ServerName www.fivi.com:80 Include /usr/local/apache2.2.15/conf/mod-jk.conf NameVirtualHost * <VirtualHost *> ServerName * DocumentRoot /usr/local/apache2/htdocs JkUnMount /* loadbalancer RedirectMatch 301 /(.*) http://www.fivi.com/$1 </VirtualHost> <VirtualHost *> ServerName www.fivi.com ServerAlias www.fivi.com images.fivi.com JkMount /* loadbalancer JkMount / loadbalancer [root@fivi conf]# /usr/local/apache2.2.15/bin/httpd -M Loaded Modules: core_module (static) authn_file_module (static) authn_default_module (static) authz_host_module (static) authz_groupfile_module (static) authz_user_module (static) authz_default_module (static) auth_basic_module (static) cache_module (static) disk_cache_module (static) mem_cache_module (static) include_module (static) filter_module (static) deflate_module (static) log_config_module (static) env_module (static) headers_module (static) setenvif_module (static) version_module (static) ssl_module (static) mpm_prefork_module (static) http_module (static) mime_module (static) status_module (static) autoindex_module (static) asis_module 
(static) cgi_module (static) negotiation_module (static) dir_module (static) actions_module (static) userdir_module (static) alias_module (static) rewrite_module (static) so_module (static) jk_module (shared) Syntax OK

    Read the article

  • samba 3.5 "force user" doesn't seem to be sticking

    - by myCubeIsMyCell
    After installing a new OS with a newer version of samba, I'm having trouble accessing my shares. I can browse to the specific share, but only to the top level. As best I can tell from the logs, it seems the "force user" in the samba config isn't sticking beyond the initial connection. Details below.

    I installed a new version of CentOS on my storage server. My old CentOS (4?) install had samba version 3.0.33; the new CentOS is using 3.5.10. No domain/AD involved, just a home workgroup. No real security, just some shares hidden and some defined as read-only. Here's my config:

        [global]
            workgroup = WORKGROUP
            server string = Samba Server Version %v
            netbios name = luna
            security = share
            # logs split per machine
            log file = /var/log/samba/log.%m
            log level = 2
            # max 50KB per log file, then rotate
            max log size = 50
            winbind use default domain = Yes

        [strge]
            comment = please
            path = /storage
            browseable = yes
            read only = no
            force user = windowsguest
            force group = users
            guest ok = yes

    So the problem I'm running into is that the 'force user' only seems to hold for the initial connection, and I see all the top-level folders fine. When I drill into a folder I get access denied, which appears to be due to my Windows user info being sent (it tries to authenticate xuser, a user that doesn't exist in samba, so it maps to nobody and fails). Here's the smb error msg:

        [2012/11/29 14:30:27.326195, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [xuser] -> [xuser] FAILED with error NT_STATUS_NO_SUCH_USER
        [2012/11/29 14:30:27.326251, 2] auth/auth.c:314(check_ntlm_password)
          check_ntlm_password: Authentication for user [nobody] -> [nobody] FAILED with error NT_STATUS_NO_SUCH_USER

    Most of the top-level directories are 755, some 777. Either way, I cannot access them. If I do a chown -R windowsguest.users there is no change, but if I do a chmod -R to 777 or 755 they become browsable, though I still can't create files (even for the 777 ones). Not sure what role it plays, if any, but I had to recreate the user windowsguest under the new OS install; the uid & gid match the old user. The main issue, as far as I can tell, is that samba isn't maintaining the 'force user', but I could be wildly off base. Client OS is Win7 Pro x64. Thanks for any suggestions or advice!
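
    The logs show the Windows username (xuser) failing NTLM authentication before the share-level force user ever applies. One direction often suggested for samba 3.5, sketched below and not verified against this setup, is to map failed logins to the guest account explicitly in [global] (typically together with moving off the deprecated security = share):

        [global]
            security = user
            # send sessions from unknown Windows users (e.g. "xuser") to the guest account,
            # so the share's "force user = windowsguest" can then take effect
            map to guest = Bad User
            guest account = windowsguest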

    Read the article

  • Puppet - Possible to use software design patterns in modules?

    - by Mike Purcell
    As I work with puppet, I find myself wanting to automate more complex setups, for example vhosts for X number of websites. As my puppet manifests get more complex I find it difficult to apply the DRY (don't repeat yourself) principle. Below is a simplified snippet of what I am after, but doesn't work because puppet throws various errors depending up whether I use classes or defines. I'd like to get some feed back from some seasoned puppetmasters on how they might approach this solution. # site.pp import 'nodes' # nodes.pp node nodes_dev { $service_env = 'dev' } node nodes_prod { $service_env = 'prod' } import 'nodes/dev' import 'nodes/prod' # nodes/dev.pp node 'service1.ownij.lan' inherits nodes_dev { httpd::vhost::package::site { 'foo': } httpd::vhost::package::site { 'bar': } } # modules/vhost/package.pp class httpd::vhost::package { class manage($port) { # More complex stuff goes here like ensuring that conf paths and uris exist # As well as log files, which is I why I want to do the work once and use many notify { $service_env: } notify { $port: } } define site { case $name { 'foo': { class 'httpd::vhost::package::manage': port => 20000 } } 'bar': { class 'httpd::vhost::package::manage': port => 20001 } } } } } That code snippet gives me a Duplicate declaration: Class[Httpd::Vhost::Package::Manage] error, and if I switch the manage class to a define, and attempt to access a global or pass in a variable common to both foo and bar, I get a Duplicate declaration: Notify[dev] error. Any suggestions how I can implement the DRY principle and still get puppet to work? -- UPDATE -- I'm still having a problem trying to ensure that some of my vhosts, which may share a parent directory, are setup correctly. Something like this: node 'service1.ownij.lan' inherits nodes_dev { httpd::vhost::package::site { 'foo_sitea': } httpd::vhost::package::site { 'foo_siteb': } httpd::vhost::package::site { 'bar': } } What I need to happen is that sitea and siteb have the same parent "foo" folder. The problem I am having is when I call a define to ensure the "foo" folder exists. Below is the site define as I have it, hopefully it will make sense what I am trying to accomplish. class httpd::vhost::package { File { owner => root, group => root, mode => 0660 } define site() { $app_parts = split($name, '[_]') $app_primary = $app_parts[0] if ($app_parts[1] == '') { $tpl_path_partial_app = "${app_primary}" $app_sub = '' } else { $tpl_path_partial_app = "${app_primary}/${app_parts[1]}" $app_sub = $app_parts[1] } include httpd::vhost::log::base httpd::vhost::log::app { $name: app_primary => $app_primary, app_sub => $app_sub } } } class httpd::vhost::log { class base { $paths = [ '/tmp', '/tmp/var', '/tmp/var/log', '/tmp/var/log/httpd', "/tmp/var/log/httpd/${service_env}" ] file { $paths: ensure => directory } } define app($app_primary, $app_sub) { $paths = [ "/tmp/var/log/httpd/${service_env}/${app_primary}", "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}" ] file { $paths: ensure => directory } } } The include httpd::vhost::log::base works fine, because it is "included", which means it is only implemented once, even though site is called multiple times. The error I am getting is: Duplicate declaration: File[/tmp/var/log/httpd/dev/foo]. I looked into using exec, but not sure this is the correct route, surely others have had to deal with this before and any insight is appreciated as I have been grappling with this for a few weeks. Thanks.
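
    The Duplicate declaration: File[...] error comes from every site instance re-declaring the shared parent directory. One hedged alternative to wrapping it in an include-d class is stdlib's ensure_resource, which only adds a resource if it is not already in the catalog; a sketch of the app define, assuming the puppetlabs-stdlib module is available:

        # Sketch: declare the shared parent directory idempotently, then the per-site directory.
        # Requires puppetlabs-stdlib for ensure_resource().
        define httpd::vhost::log::app($app_primary, $app_sub) {
          ensure_resource('file', "/tmp/var/log/httpd/${service_env}/${app_primary}", {
            'ensure' => 'directory',
          })
          file { "/tmp/var/log/httpd/${service_env}/${app_primary}/${app_sub}":
            ensure => directory,
          }
        }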

    Read the article

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller shop setup. There is now one additional network that has an IP Range OUTSIDE of our control, connected to the internet with another router OUTSIDE of our control. Call it a project network that is part of another companies network and combined via VPN they set up. This means: They control the router that is used for this network and They can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN capable switches as it covers three locations. At one end there is the router the other company controls. I Need / want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my active directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence. Any sort of router integration using the externally controlled router is out by this idea So, my idea is this: We accept the IPv4 address space and network topology in this network is not under our control. We seek alternatives to integrate those machines into our company network. The 2 concepts I came up with are: Use some sort of VPN - have the machines log into VPN. Thanks to them using modern windows, this could be transparent DirectAccess. This essentially treats the other IP space not different than any restaurant network a laptop of the company goes in. Alternatively - establish IPv6 routing to this ethernet segment. But - and this is a trick - block all IPv6 packets in the switch before they hit the third party controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it) they would get not a single packet. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type). Anyone sees a problem with using he switch to isolate the outer from IPv6? Any security hole? It is sad we have to treat this network as hostile - would be a lot easier - but the support personnel there is of "known dubious quality" and the legal side is clear - we can not fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I’m looking to build a multi-purpose home development server. In this post, I’m looking to outline what I want from such a system, and the ‘why’s of it, to some limited extent, and finally, some rudiments of how I’m looking to go about that. I’m mostly a developer, with just about some sysadmin familiarity. So, please excuse, correct me, and suggest on any ignorance which would come across in the following ;-) It will serve the following goals to start with:- NAS (Looking at using ZFS) Source control repo e.g Git server Database e.g MySQL server Continuous Integration e.g Hudson server Other stuff as and when they come up e.g RabbitMQ etc A development sandbox to play around with new stuff I want to achieve a high uptime for 2-5 as much as possible. They should run as independent services and with minimal maintenance. (e.g TurnKey Linux appliances) I’m thinking of running them as individual Xen DomUs. Then, maybe the NAS can be a Dom0 and 6 can be another DomU. The User for this would be mostly me. I can see 2-4 being sometimes used by 2-3 users, but that would be infrequent. I’m looking for a repeatable setup. Ideally I’d like to automate this setup through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN/on-the-go. My queries are:- Is putting 1-6, all of them on a single box, a good idea? If so, what kind of hardware should I be looking at, for a low-cost, low-power setup? Although not at present, but in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1? I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use? I had a look at the Sheevaplug, and was wondering if I could split off the NAS on its own using that. But ruled it out preliminarily due to its reported heating issues. Is it something i should still consider? Thanks in advance

    Read the article

  • Scoping a home dev server

    - by AbhikRK
    Hi. I’m looking to build a multi-purpose home development server. In this post, I’m looking to outline what I want from such a system, and the ‘why’s of it, to some limited extent, and finally, some rudiments of how I’m looking to go about that. I’m mostly a developer, with just about some sysadmin familiarity. So, please excuse, correct me, and suggest on any ignorance which would come across in the following ;-) It will serve the following goals to start with:- NAS (Looking at using ZFS) Source control repo e.g Git server Database e.g MySQL server Continuous Integration e.g Hudson server Other stuff as and when they come up e.g RabbitMQ etc A development sandbox to play around with new stuff I want to achieve a high uptime for 2-5 as much as possible. They should run as independent services and with minimal maintenance. (e.g TurnKey Linux appliances) I’m thinking of running them as individual Xen DomUs. Then, maybe the NAS can be a Dom0 and 6 can be another DomU. The User for this would be mostly me. I can see 2-4 being sometimes used by 2-3 users, but that would be infrequent. I’m looking for a repeatable setup. Ideally I’d like to automate this setup through Chef or Puppet or something similar. Once everything runs, I want to be able to ssh/screen/tmux into 1-6 from my laptop or any other computer on the LAN/on-the-go. My queries are:- Is putting 1-6, all of them on a single box, a good idea? If so, what kind of hardware should I be looking at, for a low-cost, low-power setup? Although not at present, but in future I might be looking at adding audio/media servers to the mix. Would that impact the answers to 1? I have an old Pentium 3 and 810e motherboard combination. Is there any way I could put it to use? I had a look at the Sheevaplug, and was wondering if I could split off the NAS on its own using that. But ruled it out preliminarily due to its reported heating issues. Is it something i should still consider? Thanks in advance Have posted this question previously on SuperUser but no responses yet. So was wondering if this is a more apt forum for this.

    Read the article

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.

    I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests. Crosses fingers to avoid 100 concurrent requests).

    I expect two types of transactions, a local (not referring to a local network) and a foreign transaction. Local transactions basically download userdata in their locality and cache it for 1-2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid. Foreign transactions are for attendance of those who do not belong in the current locality. These will pass in the following data instead: (numeric) locality_id, (string) full_name. Both requests are done in Ajax, so no HTML data is included, only JSON. Both types of requests expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there are only a few bytes of difference anyway in the sizes of these transactions. As of this moment the userid may only reach 6 digits, and eventids are 4- to 5-digit integers too.

    I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day for 3 days a week (1.2M rows a week). For me, this sounds big. But is this really that big? Or can this be handled by a single server (not sure about the server specs yet since I'll probably avail of a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave setups. But I wonder if they're really necessary. The users table will add around 500 to 1k rows a week.

    If this can't be handled by a single server, then if I am to choose a MySQL replication topology, what would be the best setup for this case? Sorry if I sound vague or the question is too broad. I just don't know what to ask or what you would want to know at this point.

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When attempt to browse my Mercurial repositories it usually takes several refreshes before the repository list is displayed. The configuration is as follows: Windows Server 2003 (Dedicated machine hosted by http://www.server4you.com/. Site has anonymous password protection with self-signed SSL. Mercurial 1.5.3 Python 2.6.5 Python for Windows 32 extensions 214 py2.6 isapi-wsgi 0.4.2 The repositories are being served via ISAPI using the standard hgwebdir_wspi.py file (copy to follow). Other problems with the repository server: Before doing a clone/push/etc I have to browse the repositories first otherwise hg on my local machine can not locate the site. I have one a repository with a large changeset that after a minute or so throw error "abort: error: An existing connection was forcibly closed by the remote host". Will be asking another question for this problem. What can I do to start tracking down this problem? hgwebdir_wsgi.py # Configuration file location hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config' # Global settings for IIS path translation path_strip = 0 # Strip this many path elements off (when using url rewrite) path_prefix = 0 # This many path elements are prefixes (depends on the # virtual path of the IIS application). import sys # Adjust python path if this is not a system-wide install #sys.path.insert(0, r'c:\path\to\python\lib') # Enable tracing. Run 'python -m win32traceutil' to debug if hasattr(sys, 'isapidllhandle'): import win32traceutil # To serve pages in local charset instead of UTF-8, remove the two lines below import os os.environ['HGENCODING'] = 'UTF-8' import isapi_wsgi from mercurial import demandimport; demandimport.enable() from mercurial.hgweb.hgwebdir_mod import hgwebdir # Example tweak: Replace isapi_wsgi's handler to provide better error message # Other stuff could also be done here, like logging errors etc. class WsgiHandler(isapi_wsgi.IsapiWsgiHandler): error_status = '500 Internal Server Error' # less silly error message isapi_wsgi.IsapiWsgiHandler = WsgiHandler # Only create the hgwebdir instance once application = hgwebdir(hgweb_config) def handler(environ, start_response): # Translate IIS's weird URLs url = environ['SCRIPT_NAME'] + environ['PATH_INFO'] paths = url[1:].split('/')[path_strip:] script_name = '/' + '/'.join(paths[:path_prefix]) path_info = '/'.join(paths[path_prefix:]) if path_info: path_info = '/' + path_info environ['SCRIPT_NAME'] = script_name environ['PATH_INFO'] = path_info return application(environ, start_response) def __ExtensionFactory__(): return isapi_wsgi.ISAPISimpleHandler(handler) if __name__=='__main__': from isapi.install import * params = ISAPIParameters() HandleCommandLine(params) hgweb.config [paths] / = C:\Public\Mercurial\Repositories\* [web] allow_archive = bz2 gz zip ; Allows archive downloads. allow_push = ######## ; Users that are allowed to push.

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out a performance problems with a Mumble server, which I described in a previous question are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this and how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices. One for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES encrypted LUKS container. Inside the LUKS container there is a LVM physical volume. The hypervisor's VFS is split on three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The IO scheduler (elevator) on both the hypervisor and the guest system is set to deadline instead of the default cfs as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure that not the basic structure causes the latency problems, as my previous server ran four years with almost the same basic setup, without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both hypervisor and almost all guests Xen software virtualisation (therefore no Virtio, also) no LibVirt management Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore) We had no IPv6 connectivity Neither in the hypervisor nor in guests we had noticable I/O latency problems According the the datasheets, the current hard drives and the one of the old machine have an average latency of 4.12ms.
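
    With this many layers, a common way to narrow the search (a suggestion, not something from the question) is to measure latency at each layer separately and see where it jumps; a sketch using ioping (assumed installed), with placeholder device-mapper names:

        # Measure I/O latency layer by layer on the hypervisor; names in <> are placeholders.
        ioping -c 20 /dev/sda                            # raw disk
        ioping -c 20 /dev/md1                            # software RAID1
        ioping -c 20 /dev/mapper/<luks-name>             # LUKS layer
        ioping -c 20 /dev/mapper/<vg>-hypervisor--root   # LVM logical volume
        ioping -c 20 /tmp                                # filesystem as seen by the hypervisor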

    Read the article

  • Can't get DHCPd to assign IPs to unknown clients

    - by Jakobud
    I'm using Webmin to admin our DHCPd server. But I'm having a hard time getting it to assign IP addresses to unknown clients. The only way I can get it to assign an IP is to make sure a host is added to DHCPd as a host so that it gets a static-lease IP assigned to it. I thought "Allow Unknown Clients" was the key, but it still isn't assigning IPs to unknown clients. I have a pool setup so that the unknown clients should get an IP between 10.20.0.200 - 10.20.0.249. Here is the config file. What am I missing here?

        allow unknown-clients;
        # Primary DHCP server config
        authoritative;
        ddns-update-style none;

        failover peer "dhcp-failover" {
            primary;
            address 10.20.0.30;
            port 647;
            peer address 10.20.0.25;
            peer port 647;
            max-response-delay 60;
            max-unacked-updates 10;
            load balance max seconds 3;
            mclt 3600;
            split 128;
        }

        subnet 10.20.0.0 netmask 255.255.255.0 {
            allow unknown-clients;
            option subnet-mask 255.255.255.0;
            option broadcast-address 10.20.0.255;
            option routers 10.20.0.100;
            option domain-name "ourdomain.com";
            option domain-name-servers 192.168.10.20;
            default-lease-time 86400;
            max-lease-time 86400;
            option ntp-servers 192.168.10.20;
            option time-offset -25200;

            pool {
                allow unknown-clients;
                failover peer "dhcp-failover";
                max-lease-time 86400;
                range 10.20.0.200 10.20.0.249;
                deny dynamic bootp clients;
            }

            host Server-myserver {
                option host-name "whatever.ourdomain.com";
                hardware ethernet 00:89:D4:35:4F:13;
                fixed-address 10.20.0.23;
            }
        }

    Read the article

  • HOW TO Convert DVD to iPad (Also converts to iPods)?

    - by goodm
    DVD to iPad Converter (Also converts to iPods) DVD to iPad Converter is the easiest-to-use and fastest DVD to iPad converter for Apple iPad movie and iPad video. It can convert almost any kind of DVD to iPad movie or iPad video format. It is also a powerful DVD to iPad converter with a conversion speed that is much faster than real-time. With this converter, you can use your iPad as a portable DVD player and enjoy your favorite DVDs on your iPad. http://www.softseeking.com/prodail.aspx?proid=83" Features of this DVD to iPad Converter Three Running Modes --In Direct Mode, you can directly click the DVD menu to select the movie you want to rip. This mode is very easy for ripping DVD movies. --In Batch Mode, you can select the DVD titles or chapters you want to rip via a checkbox list. This mode is very easy for batch ripping music DVDs, MTV DVDs and episodic DVDs. --In 1-Click Mode, you just need one click to open a DVD, after which the rest of the task will be done automatically. This is a “designed-for-dummies” mode. Input Types You can convert almost any kind of DVD format to iPad. Output Splitting You can split your output video by DVD chapters and titles. Fully supports MTV DVDs and episodic DVDs. File Size and Quality Adjustment You can customize the output file size and corresponding video quality. Flexible Output Profiles You can easily customize the various video settings such as brightness, bit rate, etc. Language Selection for Subtitles and Audio Track In Direct mode and in Batch mode, you can select the subtitle and audio track language. (In 1-Click mode, the default language is chosen automatically). Video Crop You can crop your video to 16:9, 4:3, full screen, etc. Video Resize You can resize your video. For example, you can set it to "Keep aspect ratio" or "Stretch to fit screen." Other Converts DVD to MP3 audio. Supports Dolby, DTS Surround audio track. Converts to the latest iPhone, 4th generation iPad nano, nano chromatic, 2nd generation iPad touch, and Apple TV.

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump start some initial Puppet config and am confused on how to include/run multiple manifests (other than just site.pp) in the puppet execution workflow without making the extra manifests into modules and including them that way. In the puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp. config.vm.provision "puppet" do |puppet| puppet.manifests_path = "puppet_files/manifests" puppet.module_path = "puppet_files/modules" puppet.manifest_file = "site.pp" puppet.options = "--verbose --debug" end Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this: File { owner => 'root', group => 'root', mode => '0644', } import "hierasetup.pp" include jboss But I get this error about the deprecation of "import": Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190') According to the referenced URL under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further this puppet doc on main manifests says: "Recommended: If you’re using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments." It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect that Vagrant would make a provision for this and allow me to drop the "puppet.manifest_file = "site.pp" line and point to the parent directory instead in which all the *.pp files there will be executed. However removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead: puppet provisioner: * The configured Puppet manifest is missing. Please specify a path to an existing manifest: /some/path/puppet_files/manifests/default.pp So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this new change to accommodate the referencing of directories in conjunction with Puppet's deprecation of "import"? Update: Thanks to Shane the issue with #2 (Vagrant's code not being caught up to allow pointing to puppet manifest directories) was reported on Vagrant's GitHub issue tracker site and has since been patched: https://github.com/mitchellh/vagrant/issues/4169

    Read the article

  • How can I change how OS X's 'say' command pronounces a word?

    - by jwhitlock
    OS X's say command is useful for some tasks (such as Skype's 'notify me when a contact comes online), but it is pronouncing some names incorrectly. Is there a way to teach say to pronounce a word differently? For example, try: say "Hi, Joel Spolsky" The 'ol' sounds like 'ball' rather than 'old'. I'd like to add an exception that say "Pronounce Spolsky like this", rather than try to teach new linguistic rules. I bet there is a way since it can pronounce "iphone" as Apple wants. Update - After some research, here's what I've learned: Text-to-speech is split between turning the text to phonemes, and then the phonemes are turned into audio using a voice. Changing the voice doesn't effect the phonemes. The Speech Synthesis Manager has some functions for turning text to phonemes, and a method for registering a speech dictionary that will add new text-phoneme maps. However, Apple's speech dictionary must be in a binary form - I didn't find any plist XML. Using dtrace while running say, I found some interesting files opened in /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources. This is probably the speech dictionary, but they are all binary, except for Homophones, which is XML. Adding entries to Homophones does nothing - it is probably used in speech-to-text. They are also code signed by Apple - changing them may prevent some programs from working. PrefixDictionary CartNames CartLite SymbolDictionary Homophones There are ways to add text versions of application interface elements so VoiceOver works, a lot of which a developer gets for free, but there are tricky bits. The standard here appears to be to use a phonetic spelling as needed. My guesses are: say is a light layer of code on top of the Speech Synthesis Manager. It would be easy for the Apple devs to add a command line option to take the path to a speech dictionary plist for alternate phoneme mapping, but they didn't. It may be a useful open-source project to write a better say. Skype probably uses Speech Synthesis Manager directly, leaving no hooks to change the way my friend's names are pronounced, other than spelling them phonetically, which is silly. The easiest way to make a command line version of say is how JRobert suggested. Here's my quick implementation, using Doug Harris's spelling suggestion: #!/bin/sh echo $@ | tr '[A-Z]' '[a-z]' | sed "s/spolsky/spowlsky/g" | /usr/bin/say Finally, some fun command line stuff: # Apple is weird sqlite3 /System/Library/PrivateFrameworks/SpeechDictionary.framework/Resources/Tuples .dump # Get too much information about what files are being opened sudo dtrace -n 'syscall::open*:entry { printf("%s %s",execname,copyinstr(arg0)); }' # Just fun say -v bad "Joel Spolsky Spolsky Spolsky Spolsky Spolsky, Joel Spolsky Spolsky Spolsky Spolsky Spolsky" echo "scale=1000; 4*a(1)" | bc -l | say

    Read the article

  • Any way to recover ext4 filesystems from a deleted LVM logical volume?

    - by Vegar Nilsen
    The other day I had a proper brain fart moment while expanding a disk on a Linux guest under Vmware. I stretched the Vmware disk file to the desired size and then I did what I usually do on Linux guests without LVM: I deleted the LVM partition and recreated it, starting in the same spot as the old one, but extended to the new size of the disk. (Which will be followed by fsck and resize2fs.) And then I realized that LVM doesn't behave the same way as ext2/3/4 on raw partitions... After restoring the Linux guest from the most recent backup (taken only five hours earlier, luckily) I'm now curious on how I could have recovered from the following scenario. It's after all virtually guaranteed that I'll be a dumb ass in the future as well. Virtual Linux guest with one disk, partitioned into one /boot (primary) partition (/dev/sda1) of 256MB, and the rest in a logical, extended partition (/dev/sda5). /dev/sda5 is then setup as a physical volume with pvcreate, and one volume group (vgroup00) created on top of it with the usual vgcreate command. vgroup00 is then split into two logical volumes root and swap, which are used for / and swap, logically. / is an ext4 file system. Since I had backups of the broken guest I was able to recreate the volume group with vgcfgrestore from the backup LVM setup found under /etc/lvm/backup, with the same UUID for the physical volume and all that. After running this I had two logical volumes with the same size as earlier, with 4GB free space where I had stretched the disk. However, when I tried to run "fsck /dev/mapper/vgroup00-root" it complained about a broken superblock. I tried to locate backup superblocks by running "mke2fs -n /dev/mapper/vgroup00-root" but none of those worked either. Then I tried to run TestDisk but when I asked it to find superblocks it only gave an error about not being able to open the file system due to a broken file system. So, with the default allocation policy for LVM2 in Ubuntu Server 10.04 64-bit, is it possible that the logical volumes are allocated from the end of the volume group? That would definitely explain why the restored logical volumes didn't contain the expected data. Could I have recovered by recreating /dev/sda5 with exactly the same size and disk position as earlier? Are there any other tools I could have used to find and recover the file system? (And clearly, the question is not whether or not I should have done this in a different way from the start, I know that. This is a question about what to do when shit has already hit the fan.)
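
    Whether Ubuntu's default allocation policy really placed the recreated logical volumes on different extents than the originals can be checked directly; a read-only sketch using standard LVM reporting commands:

        # Read-only queries: where do the recreated LVs actually sit inside the volume group?
        lvs -o lv_name,seg_start_pe,seg_size,seg_pe_ranges vgroup00
        # And how the physical volume's extents are counted and allocated:
        pvs -o pv_name,pv_pe_count,pv_pe_alloc_count /dev/sda5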

    Read the article

  • Some domain names not resolving on local network

    - by Solignis
    I am not really sure where to start with this one... I have a small network setup with some Linux servers (Ubuntu 11.04 Server). 2 servers are running BIND 9 (NS01, NS02); they are configured as master and slave respectively. 1 server is running Zimbra ZCS 7.1.1 (MX01); it has a private BIND 9 server running to achieve a split DNS configuration. This DNS server does not interact with the other two; it forwards queries it can't resolve to the other 2, and that is it. No zone transfers. Zimbra is hosting 3 domains at the moment: solignis.local, solignis.com, campbellsurvey.net.

    The problem

    From within my network I cannot connect to mail.campbellsurvey.net. When I say I cannot connect, I mean if I open Firefox and type https://mail.campbellsurvey.net I go nowhere; the address is supposed to connect to my Zimbra webmail. But it goes nowhere. The odd thing is, if I try the same task from outside of the network, it brings the website up like normal. If I try to create an account in Thunderbird to connect to the same server using IMAP4 or POP3, I get an error saying that Thunderbird cannot find the domain name. Even the Zimbra client fails too. It is like, from within my own walls, campbellsurvey.net does not exist. But if I step outside I can get it to work with no problem at all.

    I had thought maybe the problem was with the DNS server (BIND 9), so just to eliminate it as a possibility I configured a Windows server I use for VMware vCenter as a DNS server to see what would happen. The result was the same; it's like something is preventing connections to those domains, but I have checked various firewalls and such. I checked port forwards, etc. So I am running out of ideas. I know this is not a lot of information to work from and I can give more details about certain things as needed. I am just trying to figure out what could be going wrong. Any help you could offer would be much appreciated.
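
    A quick way to confirm whether this is a name-resolution problem rather than a connectivity one (a suggested diagnostic, not from the question) is to query the internal and an external resolver side by side; the internal address below is a placeholder:

        # Compare what the internal split-DNS server and a public resolver return.
        dig mail.campbellsurvey.net @192.168.x.x    # placeholder for the internal BIND/Zimbra DNS
        dig mail.campbellsurvey.net @8.8.8.8
        # And check which resolver the failing client is actually configured to use.
        nslookup mail.campbellsurvey.net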

    Read the article
