Search Results

Search found 17610 results on 705 pages for 'specific'.


  • Customer site is out of IP addresses, they want to go from /24 to /12 netmask... Bad idea?

    - by ewwhite
    One of my client sites called to ask me to change the subnet masks of the Linux servers I manage there while they re-IP/change the netmask of their network based on a 10.0.0.x scheme. "Can you change the server netmasks from 255.255.255.0 to 255.240.0.0?" You mean, 255.255.240.0? "No, 255.240.0.0." Are you sure you need that many IP addresses? "Yeah, we never want to run out of IP addresses." A quick check against the Subnet Cheat Sheet shows: a 255.255.255.0 netmask, a /24 provides 256 hosts. It's clear to see that an organization can exhaust that number of IP addresses. a 255.240.0.0 netmask, a /12 provides 1,048,576 hosts. This is a small < 200-user site. I doubt that they'd allocate more than 400 IP addresses. I suggested something that provides fewer hosts, like a /22 or /21 (1024 and 2048 hosts, respectively), but was unable to give a specific reason against using the /12 subnet. Is there anything this customer should be concerned about? Are there any specific reasons they shouldn't use such an incredibly large mask in their environment?
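
    For a sense of scale, the usable-host arithmetic can be checked with a small bash loop; a sketch, with nothing specific to their network assumed (usable hosts = 2^(32-prefix) - 2):

        for prefix in 24 22 21 12; do
            echo "/$prefix -> $(( (1 << (32 - prefix)) - 2 )) usable hosts"
        done
        # prints: /24 -> 254, /22 -> 1022, /21 -> 2046, /12 -> 1048574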

    Read the article

  • TFS 2010 Subfolder Permissions

    - by gmcalab
    I am a TFS admin, and I have a TFS project in which a subfolder needs specific permissions to deny some users. So I right-click the folder in question, hit Properties, and click the Security tab. There I select the Windows User or Group radio button, then click Add. I put in the AD user that I want specific permissions for and hit Check Names. That resolves, so I click OK. Next, I select the permissions to Allow or Deny below in the Permissions for list and hit OK. The permissions are honored by TFS; this user no longer has PendChange permission, as I was expecting. The odd thing is, I expected to be able to go back into the Security tab and see that user in the Users and Groups list along with the current state, but the list is always empty. Not sure why, but the permissions are definitely being honored; I can re-add the user with different permissions and those are honored as well. Any ideas why the current users are not showing up in the Users and Groups list under the Security tab of the folder's properties? I also used tf permission $\... to see if there were any permissions set, but it always returns "There are no permissions set for this item (Inherit: Yes)".
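
    For completeness, the same permissions can be inspected or set from the command line with tf permission; a sketch, where the server path and account are placeholders:

        tf permission $/MyProject/SubFolder /user:DOMAIN\jsmith
        tf permission $/MyProject/SubFolder /deny:PendChange /user:DOMAIN\jsmith /recursive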

    Read the article

  • IIS7.5 - about app pool ID's and folder read/write access

    - by merk
    I did some searching and it looks like for each app pool there should be an account called IIS APPPOOL\AppPoolName - however, I can see no such account when I try to modify the permissions on a folder to give that app write access. The closest I have found is the IIS_IUSRS group. Now, if I go into that group and look at the members, I see several IIS APPPOOL\PoolName members. But where are these members coming from? Why don't they show up under the users? And why can't I add a specific one to a folder? It doesn't make sense to me to add the IIS_IUSRS group to a folder, since that gives every site access to the folder. To be more specific, I'm setting up WordPress and it unfortunately wants write access to its root folder, so I want to restrict it as much as possible. I was trying to figure out how to set it so that the WP root folder has write access only for the identity that the blog's app pool is running under. When I drill down into the IIS_IUSRS group, I do not see the app pool for the blog listed there. The settings for the blog's app pool are: No managed code, Classic pipeline, ApplicationPoolIdentity, and it's named 'blog'. So, any explanations regarding these users that are created for the app pools, and why the blog doesn't seem to belong to the IIS_IUSRS group? Thanks
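
    For reference, the app pool's virtual identity can be granted NTFS rights even though it never appears in the local user list; from an elevated prompt, something like this works (the WordPress path is only an example, and it assumes the pool really is named 'blog'):

        icacls "C:\inetpub\wwwroot\blog" /grant "IIS APPPOOL\blog:(OI)(CI)M"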

    Read the article

  • Computer won't start after installing new video card

    - by Vercas
    So, 1 year and 340 days ago I bought a desktop computer. Since then, it has served me well. But lately, I wanted an upgrade, so I bought a new video card. I documented myself about the compatibility, and it is okay. So I opened the case, cleaned up that... dust elemental living inside of it. Unscrewed the plastic thingie on the outside to unscrew the old video card. Because of the stupid arrangement of the ports, I had to unscrew the motherboard to unplug it. So I unscrewed it, removed the old card, put in the new one, moved the motherboard back, screwed it back in, screwed the video card on the holder... thingie, and screwed the plastic thingie back in. Everything went smoothly, nothing had to be forced in/out. I connected the external power supply, closed the computer case, put the tower back in it's place and all the cables back in. When I pressed the power button, the LED turned... some color I can't distinguish. It stayed that way for a second, and then it went off. I tried a bunch of things, including permuting the external power supply arrangement (1 connection, 2 connections and no connections), with no success. And here are some of the specifications: Motherboard manufacturer: Asrock Processor: AMD Athlon II X2 3.0 GHz RAM: 2 x 2GB (had only 1 initially, bought the second plate a bit later) OLD video card: AMD Radeon HD 5450 NEW video card: Gigabyte nVidia GeForce GTX 650 GPU, 1GB GDDR5 128bit PCI-E, Dual-link DVI-Dx2 / HDMI / D-Sub Power supply: 450W + all the requirements I managed to find on the internet are met (+12V 18A or something) More specific information is stored... On that computer. If required, I may open the case again and read the stickers to find more specific information. I can also provide photos if necessary. Any ideas? Suggestions? Something? :|

    Read the article

  • Trouble Letting Users Get to Certain Sites through Squid Proxy

    - by armani
    We have Squid running on a RHEL server. We want to block users from getting to Facebook, other than a couple of specific sites, like our organization's page. Unfortunately, I can't get those specific pages unblocked without allowing ALL of Facebook through.

    [squid.conf]

        # Local users:
        acl local_c src 192.168.0.0/16
        # HTTP & HTTPS:
        acl Safe_ports port 80 443
        # File containing blocked sites, including Facebook:
        acl blocked dst_dom_regex "/etc/squid/blocked_content"
        # Whitelist:
        acl whitelist url_regex "/etc/squid/whitelist"
        # I do know that order matters:
        http_access allow local_c whitelist
        http_access allow local_c !blocked
        http_access deny all

    [blocked_content]

        .porn_site.com
        .porn_site_2.com
        [...]
        facebook.com

    [whitelist]

        facebook.com/pages/Our-Organization/2828242522
        facebook.com/OurOrganization
        facebook.com/media/set/
        facebook.com/photo.php
        www.facebook.com/OurOrganization

    My biggest weakness is regular expressions, so I'm not 100% sure if this is all correct. If I remove the "!blocked" part of the http_access rule, all of Facebook works. If I remove "facebook.com" from the blocked_content file, all of Facebook works. Right now, visiting facebook.com/OurOrganization gives a "The website declined to show this webpage / HTTP 403" error in Internet Explorer, and "Error 111 (net::ERR_TUNNEL_CONNECTION_FAILED): Unknown error" in Chrome. WhereGoes.com tells me the redirects for that URL go like this: facebook.com/OurOrganization -- [301 Redirect] -- http://www.facebook.com/OurOrganization -- [302 Redirect] -- https://www.facebook.com/OurOrganization. I tried turning up the debug traffic out of Squid using "debug_options ALL,6" but I can't narrow anything down in /var/log/access.log and /var/log/cache.log. I know to issue "squid -k reconfigure" whenever I make changes to any files.
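
    For comparison, a commonly used arrangement evaluates the whitelist before the block and leaves everything else to a plain allow; the sketch below is untested against this exact setup. Two hedged notes: Squid's stock ACL type is spelled dstdom_regex, and the HTTPS redirect turns the request into CONNECT www.facebook.com:443, so a URL-path whitelist can only ever match the plain-HTTP hops.

        acl whitelist url_regex -i "/etc/squid/whitelist"
        acl blocked dstdom_regex -i "/etc/squid/blocked_content"
        http_access allow local_c whitelist
        http_access deny local_c blocked
        http_access allow local_c
        http_access deny all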

    Read the article

  • How do I enable Ubuntu Gnome system tools

    - by RussellW
    I am running Ubuntu 10 with GNOME 2.30.2. This is a VMware Workstation image provided by another company, and I do not have support from them in this regard. I am trying to access the graphical tools for configuring the network, users, and services, but the System > Administration menu does not list these options. The main issue I am trying to solve is to correct the problems with the GNOME menu options and network settings. I have the gnome-system-tools package installed, and I am unable to run command-line versions of the tools, such as nm-applet (I get no GUI if I run that command; the process is running in the background). I realize that I can perform many tasks on the command line, but I would like to use the GUI for administrative functions, as I am not overly proficient with all the commands for restarting services and setting a static IP with a specific gateway. Further, I can run gnome-nettool, but I cannot change the IP; I can only see my network card. nm-connection-editor does not show any network cards that I can configure to change the IP. Currently I am getting an address over DHCP through my VMware NAT, but I want to set a specific IP address. Screenshots I have referenced:
    1. Preferences menu (note some missing options): http://i.imgur.com/kl8pP.png
    2. Administration menu (note some missing options): http://i.imgur.com/K3Cjz.png
    3. Network Tools (I can view but not change the IP address): Iq7Xb.png
    4. Network Settings (unable to change the IP address): 7wheV.png
    5. Network Connections (no connections listed, not even my existing Ethernet NAT connection through VMware): J2ad8.png
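
    In the meantime, if the goal is just a static address on the VMware NAT network, it can be set outside the GUI in /etc/network/interfaces; a sketch with placeholder addresses (on 10.04, NetworkManager normally leaves interfaces configured here alone, but that is worth double-checking on this image):

        # Placeholder addresses: pick a free address in the VMware NAT subnet and
        # verify the gateway (VMware NAT gateways often end in .2) from the current DHCP lease.
        auto eth0
        iface eth0 inet static
            address 192.168.148.10
            netmask 255.255.255.0
            gateway 192.168.148.2
        # apply with: sudo /etc/init.d/networking restart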

    Read the article

  • cset shield --kthread on: should I use this?

    - by lori
    I'm reading up on CPU shielding using Alex Tsariounov's cset utility here: https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility/tutorial. In the tutorial I'm finding the wording around migrating kernel threads from having access to all CPUs to running only in a certain cpuset a bit ambiguous. The tutorial says the following:

        Some kernel threads can be moved into the unshielded system cpuset as well. These are the threads that are not bound to specific CPUs. If a kernel thread is bound to a specific CPU, then it is generally not a good idea to move that thread to the system set because at worst it may hang the system and at best it will slow the system down significantly. These threads are usually the IRQ threads on a real time Linux kernel, for example, and you may want to not move these kernel threads into system. If you leave them in the root cpuset, then they will have access to all CPUs.

    The tutorial then goes on to say:

        However, if your application demands an even "quieter" shield, then you can move all movable kernel threads into the unshielded system set with the following command.

        [zuul:cpuset-trunk]# cset shield -k on
        cset: --> activating kthread shielding
        cset: kthread shield activated, moving 70 tasks into system cpuset...
        [==================================================]%
        cset: done

    I am confused by this final sentence. By using the word "however", it seems to suggest that you typically should not move the movable kernel threads into the unshielded system set. Is this the case, or is it safe to move kernel threads which can be moved into a cpuset, thereby preventing them from running on some CPUs?
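
    For orientation, the usual cset shield workflow around that command looks roughly like this (a sketch; the CPU numbers and task name are only examples):

        cset shield --cpu=1-3               # CPUs 1-3 become the shield, CPU 0 the unshielded "system" set
        cset shield -k on                   # optionally move the movable (unbound) kernel threads into "system"
        cset shield --exec ./my_rt_task     # run the latency-sensitive task inside the shield
        cset shield --reset                 # tear the shield down when done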

    Read the article

  • How can I prevent a DDOS attack on Amazon EC2?

    - by cwd
    One of the servers I use is hosted on the Amazon EC2 cloud. Every few months we appear to have a DDoS attack on this server, which slows it down incredibly. After around 30 minutes, and sometimes a reboot later, everything is back to normal. Amazon has security groups and a firewall, but what else should I have in place on an EC2 server to mitigate or prevent an attack? From similar questions I've learned to:
    - Limit the rate of requests per minute (or second) from a particular IP address via something like iptables (or maybe UFW?)
    - Have enough resources to survive such an attack - or - possibly build the web application so it is elastic / has an elastic load balancer and can quickly scale up to meet such high demand
    - If using MySQL, set up the connections so that they run sequentially so that slow queries won't bog down the system
    What else am I missing? I would love information about specific tools and configuration options (again, using Linux here), and/or anything that is specific to Amazon EC2. PS: Notes about monitoring for DDoS would also be welcomed - perhaps with Nagios? ;)
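
    On the iptables point, a common per-IP throttle for new HTTP connections uses the recent match; the numbers below are placeholders to tune against real traffic, not recommendations:

        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 100 --name HTTP -j DROP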

    Read the article

  • Why can't I ping a PC on my home network?

    - by AngryHacker
    Whenever I try to ping another box on my home network, it pings the wrong IP address:

        C:\Users\Papa>ping macmini
        Pinging macmini.belkin [208.68.143.55] with 32 bytes of data:
        Reply from 208.68.143.55: bytes=32 time=50ms TTL=110

    As you can see, it always appends belkin to anything I try to ping. So I hit up ipconfig, and belkin happens to be the connection-specific DNS suffix:

        Wireless LAN adapter Wireless Network Connection:
           Connection-specific DNS Suffix . : belkin
           IPv4 Address. . . . . . . . . . . : 192.168.2.7
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 192.168.2.1

    My setup is all DHCP, so I am not sure where belkin is coming from. I have looked through all the networking settings. Bottom line: how do I fix this?
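
    Until the suffix question is sorted out, two quick workarounds are pinging by address or pinning the name locally; the 192.168.2.x value below is a placeholder for the Mac mini's actual LAN address:

        rem check connectivity by address first (placeholder IP):
        ping 192.168.2.5
        rem or pin the name in C:\Windows\System32\drivers\etc\hosts (one line, edited as administrator):
        rem     192.168.2.5    macmini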

    Read the article

  • Drive security settings in Windows 8 Pro

    - by Donotalo
    My PC's OS is Windows 8 Pro x64. Windows 8 seems confusing. The D:\ drive is supposed to be used solely by a single user, who is in the Users group of the PC. The requirement is that:
    - that user will have full control of the D drive;
    - admins will have full control of the D drive;
    - all other users can only list drive contents - no file should be able to be opened.
    My account is an admin account. From the D drive's Security tab in Properties, I've set the following:
    - Allow "List folder contents" for the Authenticated Users group.
    - Allow "Full control" for SYSTEM.
    - Allow "Full control" for the specific user who's supposed to use the drive.
    - Allow "Full control" for the Administrators group of the computer.
    - Allow "List folder contents" for the Users group.
    After setting this up, the specific user has full control of the D drive and no other user can open any file on it. But even though my account is an admin account, no file on the D drive can be opened from my account! Why is this happening, and how can files be opened from my account? Note: all accounts on this PC are local accounts.
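
    For reference, the same ACL layout can be scripted from an elevated command prompt with icacls; a sketch where PC-NAME\SpecificUser is a placeholder, and (CI)RX without (OI) roughly corresponds to the GUI's "List folder contents":

        icacls D:\ /inheritance:r
        icacls D:\ /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "PC-NAME\SpecificUser:(OI)(CI)F"
        icacls D:\ /grant "Authenticated Users:(CI)RX" "Users:(CI)RX"
        rem note: under UAC, a non-elevated process runs with the Administrators group filtered out of the
        rem token, which is the usual reason an admin account still cannot open files whose only allowing
        rem ACE is for the Administrators group.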

    Read the article

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, when trying to create a new database, MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine MSSQL Server is running on; when it came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. Many of the online solutions have us running SQL commands against the server, but we are unable to connect to the database to run them, even in single-user mode. Specific error messages:
    - Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)
    - The SQL Server (MSSQLSERVER) service is starting. The SQL Server (MSSQLSERVER) service could not be started. A service specific error occurred: 1814.
    We need the server up and running again ASAP.
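
    For what it's worth, the recovery path most often cited for a damaged model is to start the instance with trace flag 3608 (recover master only) so that model is never opened, and then restore model from a backup. This is only a sketch - the backup path is an example, and it should be checked against Microsoft's documentation before being run on this instance:

        rem start with minimal configuration and recover only master (startup options passed through net start)
        net start MSSQLSERVER /f /T3608
        rem restore model from a known-good backup (path is an example)
        sqlcmd -S . -E -Q "RESTORE DATABASE model FROM DISK='D:\Backups\model.bak' WITH REPLACE"
        rem then restart the service normally
        net stop MSSQLSERVER
        net start MSSQLSERVER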

    Read the article

  • Help with routing table

    - by user68752
    I have tried to find the answer to my question but have not really found a clean and easy solution. I have a box (headless Ubuntu 10.04.1 server with one Ethernet port) on a LAN behind a router (running m0n0wall), on which I have successfully set up a PPTP device (ppp0); this is working flawlessly (following this link). The thing is, I want this box to route all its internet traffic through the VPN tunnel (the ppp0 device) but still be able to access the local LAN on the 192.168.1.* subnet. I've mostly succeeded with this, but my problem right now is that I have port forwards (e.g. SSH) set up in m0n0wall pointing to this specific box, which forces me to add routes on the box for every remote host that wants to reach it through such a port. For instance, a machine with IP xyz.xyz.xyz.xyz needs a static route in the box's routing table before it can access the box. This is the result of route -n:

        Destination      Gateway        Genmask          Flags Metric Ref Use Iface
        xxx.xxx.137.2    192.168.1.1    255.255.255.255  UGH   0     0    0   eth0
        xxx.xxx.137.2    0.0.0.0        255.255.255.255  UH    0     0    0   ppp0
        192.168.1.0      0.0.0.0        255.255.255.0    U     0     0    0   eth0
        yyy.yyy.0.0      192.168.1.1    255.255.0.0      UG    0     0    0   eth0
        0.0.0.0          0.0.0.0        0.0.0.0          U     0     0    0   ppp0

    Here xxx.xxx.137.2 is the address provided by the VPN server, and yyy.yyy.0.0 is a network that I want to be able to access the box; without that route I can't reach the box from outside the LAN (via the port forwards done in the router software, m0n0wall). Is there a way around this ugly solution?
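
    One way around the per-client routes is source-based policy routing: replies that leave from the box's LAN address always go back via the LAN gateway, while everything else keeps using ppp0. A sketch, where 192.168.1.50 stands in for the box's own LAN address and the table name is arbitrary:

        echo "100 lanreturn" >> /etc/iproute2/rt_tables
        ip route add default via 192.168.1.1 dev eth0 table lanreturn
        ip rule add from 192.168.1.50 lookup lanreturn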

    Read the article

  • Is this DVD drive broken? Brand new, I need help convincing

    - by acidzombie24
    I am asking because I know Dell is going to give me a problem. How do I know if the DVD burner on my laptop is broken? I burnt 4 DL discs and they ALL failed. I called, and Dell suggested Roxio; I used it and burnt one disc without error and a second disc with an error. With both apps there were no 'problems' during the burning process; they only failed during the verification process. Some of these bad discs don't work on other PCs, and one locks up Windows when I click a specific file. Does that sound like a broken burner to you guys? When I called Dell, they told me that since the drive can read discs properly 100% of the time and the software doesn't fail during the burning process, it's not a broken drive. They forwarded me to software support, who demand a fee (I think $100) to help me fix my software. I am annoyed because I don't want to be on the phone for them to watch me burn a DVD, and since I burned one disc correctly already, I don't want it to happen to burn correctly again and have them say they solved my problem (doing nothing) and charge me while refusing to refund. -edit- The errors I got were:
    1) The request could not be performed because of an I/O device error
    2) Windows locking up when opening 1 specific file
    3) Cannot copy: Data error (CRC)
    NOTE: the file that causes the problems is random on every disc

    Read the article

  • Log Problem and bash script

    - by GvWorker
    Hello guys, I have 11 Debian servers running on Rackspace cloud hosting, all running VHCS2 for hosting management. One server is used for the application and 10 are used only for SMTP. My question is regarding the SMTP servers; each server hosts one domain. My problem is that when my clients use SMTP, logs are created under /var/log/, and within 24 hours the drives are full and the server refuses all SMTP connections. I deleted the logs and ran df -h to check the disk space, but it still showed the disk full and the server was still refusing SMTP connections. Then I ran du --max-depth=1 -h, which showed the truth: the real disk space used. Then I rebooted the server and it worked fine again, but after a few hours the same situation happened. So I created the following script:

        #!/bin/sh
        rm -fr /var/log/*
        rm -fr /var/log/apache2/*.log
        rm -fr /var/log/apache2/*.log.*
        rm -fr /var/log/apache2/users/*
        rm -fr /var/log/apache2/backup/*
        reboot

    It worked for days, but after that the logs are again filling the disk. Now I want the following solutions, if anybody can help me:
    1. When I delete files from the server, the disk space should free up without rebooting.
    2. Logs should stay within a specific range - like a specific file size where old data is overwritten with new data.
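
    On point 2 (and indirectly point 1), logrotate can cap log size, and its copytruncate option sidesteps the "deleted the file but df still shows it full" problem: the space is only released once the daemon holding the open file descriptor lets go of it, which is why the reboot appeared to fix things. A sketch for a file under /etc/logrotate.d/, with example log names:

        /var/log/mail.log /var/log/mail.err {
            size 500M
            rotate 2
            compress
            missingok
            notifempty
            copytruncate
        }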

    Read the article

  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL3, 4 and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users save one. There is one account that is unable to log into RHEL3 Linux machines and generates the following errors there:

        May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
        May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'

    Other accounts, like my own, are fine:

        May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
        May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
        May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)

    As you can see, TGT validation fails. This only happens for this specific account, not for any other. The failing account's password has been reset, and I inspected both user objects in Active Directory but see nothing out of the ordinary. If I have the failing account log into a RHEL4 or RHEL5 box, there is no problem, so it must be RHEL3-specific - but the fact that only one account suffers from this eludes me. Maybe someone has seen this before?
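
    A few client-side checks on the RHEL3 box usually help narrow this sort of thing down (a sketch - substitute the real principal names):

        kinit user                      # can the failing account obtain a TGT at all from this box?
        klist -e                        # compare the ticket's encryption types with those of a working account
        kinit -k host/$(hostname -f)    # check the host keytab that pam_krb5 uses for TGT verification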

    Read the article

  • Swapping RAID sets in and out of the same controller

    - by hazymat
    This is a really simple question, and the answer is probably encoded in various wikipedia articles, however my question is reasonably specific, and I need a bulletproof answer! I'm not sure if my question pertains to hardware RAID in general, or to the specific RAID controller I'm working on. Either way it is the Dell SAS 6/iR (this is an LSI sas1068e chipset). I simply want to: remove a set of striped (RAID 0) disks from this RAID controller in a server put in another set of disks, and create a RAID 1 array (or create a new 'virtual disk', as they call it in the SAS 6/iR manual) Do stuff with the new RAID 1 array Have the option of putting back the old set of disks (the RAID 0 striped ones) I am quite sure this is possible, but I need some form of reliable, evidence-based answer as it's for a client of mine, and I need to migrate their data safely. The question: can I actually do the above? Does the RAID configuration get stored on the disks themselves, or in the hardware controller? Is any data stored in the hardware controller? If there is any chance I cannot completely restore operation of the first set of disks I removed, then I need to know about it! The manual alludes to the answer to this question (see page 45 of this document), and talks about activating an array of disks. I just need someone to confirm I can definitely do the above. See, simple question, right? :)

    Read the article

  • How can a web page change my mouse speed?

    - by Tekaholic
    I usually have many tabs open in Firefox and I haven't been able to find one specific website that causes this because I don't seem to notice it right away. I'm going to click on something on my desktop and I am lifting up the mouse several times to get across the screen. It doesn't seem to matter what program I might be using because this happens on all desktops and in Firefox, too. So I go in my settings and I turn up the mouse speed all the way and it's still not really acceptable. It doesn't matter if I click on different tabs but when I close the browser, my mouse is way too sensitive, like I'd expect at the max setting. Then I go back to Control Center and return my mouse speed and acceleration to normal. When I restart my browser, the mouse remains normal. So is there something to this before I start wasting my time hunting through my history to discover which website or sites are having this effect? ...and if it is a specific site and I locate it, what can I change to stop it's effect on my mouse besides not visiting it? I am using Linux Mint 13 on a box with an AMD Athlon processor and 2gigs of ram. I never installed another browser because everything works for me.

    Read the article

  • When buying hardware, what sites do you trust for information? [closed]

    - by Matt Dawdy
    I won't ask "what laptop should I buy" since the information is likely to change very quickly. However, I am about to buy a laptop, and I honestly don't know where to begin researching this based on my needs. I am hoping that asking about specific sites that do reviews/recommendations that this will still be on topic. I read the 6 guidelines for subjective questions and believe that this question scores favorably. I'm starting a new job in a few weeks, and they want to know what specific laptop to purchase. I'd like to get the most for their money and get a machine that will not need to be replaced in a year. When looking at a site like Dell, it's hard to get a full picture of the performance of a laptop. Does it work with a docking station, and if so, what kinds of video outs are on it? Will it work well when compiling several large projects in .Net? Has anyone had any issues with the machine getting flaky when dragging it from work to home and back all the time? etc. So, if people would enter in their preferred sites they use when researching hardware, and why they prefer that site (x is great for laptop comparisons, y is great for gaming machine reviews, etc) the I hope that this can be a question with valuable answers to others than just myself.

    Read the article

  • How can two completely unrelated pieces of software affect each other in a very strange manner?

    - by user40602
    I installed an old game on my old PC and it didn't work; its process/exe file was listed in Task Manager, but nothing appeared on the screen. Some time later I discovered that when one specific program was running on my PC, the game could be executed without that problem! Although I am a power user and also a programmer, I couldn't find the reason and don't have any good guesses about it. I just know that when I want to run that game, I need to have another specific and unrelated program running. Does anyone have any idea or guess about the possible reasons for this rare phenomenon? Oh, and if you ask about the details/names of those programs, I am hesitant to tell, because others may think I am kidding - but I am not (please believe me!): the game is NFS2 and the other program is mysqld.exe (I said before that I am a programmer!). I don't know how mysqld.exe (yes, the Windows version of the famous MySQL DBMS server) can affect NFS2 in such a strange manner, and my curiosity and profession won't let me stop seeking the answer, so I decided to ask others to see if someone has had a similar experience or a reasonable idea about it.

    Read the article

  • What could possibly cause my computer to power down at random times?

    - by geoffreydv
    I have recently bought a new Power Supply and a new graphics card. My PC ran smoothly for a few months now but since a couple of days I'm having a strange problem. I am trying to isolate the problem to a specific piece of hardware (because if it's either the Power Supply or the Graphics card they are still under warranty). The problem started when I was playing a game (diablo 3). My PC suddenly powered down. I was unable to turn it on again by pressing the power button. I unplugged the power cable for a few seconds and plugged it back in. This time the pc powered on but the indication light turned orange instead of white as it normally does. The fans were not spinning and I did not see anything on my screen. After trying a couple of times I gave up. Two days later I tried again and this time the PC did boot up as usual. Everything looked okay until I tested if the problem was resolved by starting Diablo again. After about two minutes it powered down again as it did the first time. If I don't run any games my PC does power down after about 3-5 hours. Another fact that might be relevant: One time the PC did not shut down immediatly, instead first my graphics "powered down" but the music I was playing kept on playing. After about 20 seconds the pc powered down completely as usual. What I also noticed is that when I boot instantly after a power down, the chance of another power down occuring is much higher. Does anyone have an idea what could be causing this kind of behaviour or has a certain tool to diagnose the specific hardware parts? Thanks Specs: Memory: 6GB Processor: Intel i5 OS: Windows 7 64 bit The PC is a Dell Studio XPS 8100 with a replaced PSU and Graphics card: PSU: Corsair CX500 (500 watt) Graphics card: AMD Radeon 6850

    Read the article

  • Looking for VCS wrapper that tracks system files changing across the whole *nix OS and sends diffs through email

    - by nextus
    I need some software that watches custom directories across the whole OS (e.g. /etc) and alerts me if someone edits a file inside them. Additionally, this tool must automatically commit and push changes to a backup server, so I can easily determine when a specific change to a specific file was made. I'm using cvsbackup right now, but I want to create or find something more modern. I think using git as the VCS is a great idea. I could have a local repository and easily revert changes to my configuration files. Furthermore, pushing changes to a remote repository would help me recover my configuration files when the server fails. It doesn't seem difficult to write a wrapper around git, but there are a lot of problems. For example, I need to track custom directories: /usr/local/nginx/ and /etc/, so the work tree for my git repository is /. I don't need to track the other directories, so I must write an overwhelming whitelist-style .gitignore:

        *
        !.gitignore
        !/etc/
        !etc/*
        !/usr
        /usr/*
        !/usr/local
        /usr/local/*
        !/usr/local/nginx
        !/usr/local/nginx/*

    It's very daunting and error-prone, so it may be a good idea to create an intermediate file that the wrapper reads and converts to .gitignore format. Additionally, I don't want to keep my .git folder on the / partition, so I need to set appropriate GIT_DIR and GIT_WORK_TREE variables for git. Are there any ready-to-use tools for this task? I haven't found any, but I don't believe that no one needs this feature.
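
    As a rough illustration of the wrapper idea (all paths, remote names and addresses below are examples, not recommendations), explicitly adding only the directories of interest avoids the whitelist .gitignore entirely:

        # one-time setup
        export GIT_DIR=/var/lib/cfgtrack.git GIT_WORK_TREE=/
        git init
        git add /etc /usr/local/nginx
        git commit -m "baseline"
        git remote add backup git@backuphost:cfgtrack.git
        # from cron: stage new/changed files, mail the diff, push off-box
        git add -A /etc /usr/local/nginx
        git commit -m "auto $(date -u +%FT%TZ)" &&
            git show --stat HEAD | mail -s "config change on $(hostname)" admin@example.com &&
            git push backup master
        # (git commit exits non-zero when nothing changed, so no mail or push happens in that case)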

    Read the article

  • BYOD (accessing files) on a domain without joining?

    - by Philip White
    I run a Samba 4 instance at a small private school. This makes a regular Linux server appear as a directory controller. There are two relevant benefits to this: I have a Samba share for people's documents, and I use the Redirected Folders feature to allow any employee to sit down at any PC, log in with their domain credentials, and their My Documents points to network storage. Everyone has a mapped drive (using Group Policy Preferences) to a share specific to their account type. Students can access one share (one share for all students), teachers have another, and office staff have another. However, I would like to allow BYOD (Bring Your Own Device). Some employees are already asking for it with their personal laptops, and I know eventually most everyone will want to. Is there any way to replicate the two features above without having to join PCs to the domain? Joining personal PCs is impractical if only because only professional editions of Windows support this. Ideally, any operating system (including mobile) could access the relevant shares, but of course Windows is key. Offline caching is optional. (I could set up OpenVPN for teachers who want to access their files from home.) The problem with simply giving SSH access to the relevant shares is primarily that Samba 4 relies on ext4 ACLs and ext4 extended attributes to maintain NTFS permissions. Writing files directly to the Linux server would bypass this and would (probably) not be interoperable with Samba4. Right now I am completely flexible. I am even fine with scrapping the whole domain and using some other software for the two features above. How can I allow school employees and students freedom to securely share files without requiring everyone to have specific editions of Windows?
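
    One thing worth noting: the Samba shares themselves don't require the client machine to be domain-joined; any personal device can map them by supplying domain credentials explicitly, so the per-role shares and their ACLs keep working even where Redirected Folders can't apply. For example, from a personal Windows laptop (the share, server and account names are placeholders):

        net use S: \\server\staff /user:SCHOOL\jsmith *
        rem the trailing * prompts for the password; add /persistent:yes to re-map the drive at logon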

    Read the article

  • Configure Web app for external access (IIS7), allowing only certain users via AD group. All users need internal access

    - by White Island
    We have a Web app running in IIS7 (Server 2008 R2). I now need to allow external access with an SSL certificate, so certain users (e.g. the owner of the company) can use it remotely without VPN. They want to roll out the external access only to those specific users at first (thinking: a Windows credential prompt), BUT everyone will still need access internally (HTTP), without the prompt. I have the SSL cert installed on the server and public DNS configured. I've been trying to figure out how to work the authentication/authorization. I was thinking I need to disable Anonymous authn and set Windows authn, then I keep coming back to 'URL Authorization' in my research for the group setting; however, when I tried URL authz, (removed allow all, added allow rule for the special group), it broke the site internally (403.2 Forbidden, I believe it was). I thought maybe setting up a second site in IIS pointing to the same program would work, but the exact same thing happened (and again with a new app pool, just for kicks). So I guess my question is, how would you do this: allow external access, limited to users in a specific AD group, while still allowing internal access without a credentials prompt? How do I separate the external HTTPS and internal HTTP authorization requirements? Will I need to just copy the entire contents of the app in Windows Explorer to a new folder and create my external site from that? Is Windows authentication the correct option for this? I did come across this, which refers to creating a custom module. While it sounds like a solution, it's not one I'm familiar with, and I just wondered if there is a simpler way to get it to work: http://forums.iis.net/p/1182792/2000775.aspx Thanks!
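
    For reference, URL Authorization rules scoped to a single AD group look roughly like the sketch below in the external site's web.config (the group name is a placeholder, and this assumes the URL Authorization module is installed); the prompt-free internal experience then has to come from Windows authentication with the internal host name treated as Intranet zone by the browsers, rather than from the authorization rules themselves:

        <system.webServer>
          <security>
            <authorization>
              <remove users="*" roles="" verbs="" />
              <add accessType="Allow" roles="DOMAIN\WebApp-Remote-Users" />
            </authorization>
          </security>
        </system.webServer>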

    Read the article

  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited even to this date on this blog, to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

    Request Object Paths Available

    Here's a list of the path-related properties on the Request object (and the Page object). Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below, where webstore is the name of the virtual.

        ApplicationPath - Returns the web root-relative logical path to the virtual root of this app.
            /webstore/
        PhysicalApplicationPath - Returns the local file system path of the virtual root for this app.
            c:\inetpub\wwwroot\webstore
        PhysicalPath - Returns the local file system path to the current script or path.
            c:\inetpub\wwwroot\webstore\admin\paths.aspx
        Path, FilePath, CurrentExecutionFilePath - All of these return the full root-relative logical path to the script page including path and script name. CurrentExecutionFilePath will return the 'current' request path after a Transfer/Execute call, while FilePath will always return the original request's path.
            /webstore/admin/paths.aspx
        AppRelativeCurrentExecutionFilePath - Returns an ASP.NET root-relative virtual path to the script or path for the current request. If in a Transfer/Execute call, the transferred path is returned.
            ~/admin/paths.aspx
        PathInfo - Returns any extra path following the script name (the /ExtraPathInfo portion of the example below); string.Empty if no PathInfo is available.
            /webstore/admin/paths.aspx/ExtraPathInfo
        RawUrl - Returns the full root-relative URL including querystring and extra path as a string.
            /webstore/admin/paths.aspx?sku=wwhelp40
        Url - Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than a string.
            http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
        UrlReferrer - The fully qualified URL of the page that sent the request. This is also a Uri instance, and this value is null if the page was directly accessed by typing into the address bar or using an HttpClient-based Referrer client Http header.
            http://www.west-wind.com/webstore/default.aspx?Info
        Control.TemplateSourceDirectory - Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path only to a Page or control from within the control. For non-file controls this returns the Page path.
            /webstore/admin/

    As you can see there's a ton of information available there for each of the three common path formats:

        Physical Path is an OS-style path that points to a path or file on disk.
        Logical Path is a Web path that is relative to the Web server's root. It includes the virtual plus the application-relative path.
        ~/ (Root-relative) Path is an ASP.NET-specific path that includes ~/ to indicate the virtual root Web path.

    ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

    ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

    ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root-relative path in a control rather than a location-relative path:

        <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

    ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses the Request.ApplicationPath as the base path to replace the ~. By convention, any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

        string imgPath = this.ResolveUrl("~/images/help.gif");
        imgHelp.ImageUrl = imgPath;

    Unfortunately ResolveUrl() is limited to WebForm pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content:

        <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

    which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in WebForms, due to the fact that views are not referencing specific pages but rather are often path based, which can lead to various variations on how a particular view is referenced. In Module or Handler code Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice: URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

        string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

    VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed, but it can be useful in some scenarios.

    Mapping Virtual Paths to Physical Paths with Server.MapPath()

    If you need to map root-relative or current-folder-relative URLs to physical paths, you can use HttpContext.Current.Server.MapPath().
    Inside of a Page you can do the following:

        string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

    MapPath is pretty flexible and it understands both ASP.NET style virtual paths as well as plain relative paths, so the following also works:

        string physicalPath = Server.MapPath("scripts/silverlight.js");

    as well as dot-relative syntax:

        string physicalPath = Server.MapPath("../scripts/jquery.js");

    Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember, with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE, ASPNET typically). Note that Server.MapPath will not map up beyond the virtual root of the application, for security reasons.

    Server and Host Information

    Between these settings you can get all the information you may need to figure out where you are at and to build a new URL if necessary. If you need to build a URL completely from scratch you can get access to information about the server you are accessing:

        SERVER_NAME - The name of the domain or the IP address.
            www.west-wind.com or 127.0.0.1
        SERVER_PORT - The port that the request runs under.
            80
        SERVER_PORT_SECURE - Determines whether https: was used.
            0 or 1
        APPL_MD_PATH - ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace that with LOCALHOST or the machine's NetBios name.
            /LM/W3SVC/1/ROOT/webstore

    Request.Url and Uri Parsing

    If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs, much preferable to creating URIs via string concatenation.

        Scheme - The URL scheme or protocol prefix.
            http or https
        Port - The port, if specifically specified.
        DnsSafeHost - The domain name or local host NetBios machine name.
            www.west-wind.com or rasnote
        LocalPath - The full path of the URL including script name and extra PathInfo.
            /webstore/admin/paths.aspx
        Query - The query string, if any.
            ?id=1

    The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only; if you need to modify a URL in order to change it, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs:

    Convert the Request URL to an SSL/HTTPS link

    For example, taking the current request URL and converting it to a secure URL can be done like this:

        UriBuilder build = new UriBuilder(Request.Url);
        build.Scheme = "https";
        build.Port = -1; // don't inject port
        Uri newUri = build.Uri;
        string newUrl = build.ToString();

    Retrieve the fully qualified URL without a QueryString

    AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder however:

        UriBuilder builder = new UriBuilder(Request.Url);
        builder.Query = "";
        string logicalPathWithoutQuery = builder.ToString();

    What else?

    I took a look through the old post's comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions, this update post handles most of these.
    But I'm sure there are a few more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something, leave a comment and I'll try to update the post with it in the future.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration. Although the default configuration options are often ideal, there are times when customizing the behavior is desirable. Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control: the number of threads we want the system to use when parallelizing our routine. By default, PLINQ and the TPL both use the ThreadPool to schedule tasks. Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal. However, there are times that the default behavior is not appropriate. For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system. Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class. All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property. For example, let's revisit one of the simple data parallel examples from Part 2:

        Parallel.For(0, pixelData.GetUpperBound(0), row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Here, we're looping through an image, and calling a method on each pixel in the image. If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system. This could be accomplished easily by doing:

        var options = new ParallelOptions();
        options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1);

        Parallel.For(0, pixelData.GetUpperBound(0), options, row =>
        {
            for (int col = 0; col < pixelData.GetUpperBound(1); ++col)
            {
                pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel);
            }
        });

    Now, we're restricting this routine to using no more than half the cores in our system. Note that I included a check to prevent a single core system from supplying zero; without this check, we'd potentially cause an exception. I also did not hard code a specific value for the MaxDegreeOfParallelism property. One of our goals when parallelizing a routine is allowing it to scale on better hardware. Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.
    The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads. In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let's revisit our declarative data parallelism sample from Part 6:

        double min = collection.AsParallel().Min(item => item.PerformComputation());

    Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation. If we wanted to restrict this to a limited number of threads, we would add our new extension method:

        int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1);
        double min = collection
                         .AsParallel()
                         .WithDegreeOfParallelism(maxThreads)
                         .Min(item => item.PerformComputation());

    This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options. By default, PLINQ will occasionally revert to processing a query sequentially. This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent. By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel. This can occur for (taken from MSDN):

        - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
        - Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order.
        - Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
        - Queries that contain Concat, unless it is applied to indexable data sources.
        - Queries that contain Reverse, unless applied to an indexable data source.

    If the specific query follows these rules, PLINQ will run the query on a single thread. However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query. There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly. In these cases, you can override the default behavior by using the WithExecutionMode extension method. This would be done like so:

        var reversed = collection
                           .AsParallel()
                           .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                           .Select(i => i.PerformComputation())
                           .Reverse();

    Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>. We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned. When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result. For example, the method above returns a new, reversed collection. In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead. IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.
    There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller. This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach. However, it also means that every item introduces synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot. The advantage here is that the least amount of synchronization is added to the system, which means the query will, on the whole, run the fastest. However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes. By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain. Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks. This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes. This can be done by using the WithMergeOptions extension method. For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no buffering. This can be done by changing our above routine to:

        var reversed = collection
                           .AsParallel()
                           .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                           .WithMergeOptions(ParallelMergeOptions.NotBuffered)
                           .Select(i => i.PerformComputation())
                           .Reverse();

    On the other hand, if we are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results:

        var reversed = collection
                           .AsParallel()
                           .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                           .WithMergeOptions(ParallelMergeOptions.FullyBuffered)
                           .Select(i => i.PerformComputation())
                           .Reverse();

    Notice, also, that you can specify multiple configuration options in a parallel query. By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.

    Read the article
