Search Results

Search found 15866 results on 635 pages for 'css practice'.


  • Why do C++ people love multithreading when it comes to performance?

    - by user1849534
    I have a question about why programmers seem to love concurrency and multi-threaded programs in general. I'm considering two main approaches here: an async approach, basically based on signals - or just "async" as it is called by many papers and languages, like the new C# 5.0 - where a "companion thread" manages the policy of your pipeline; and a concurrent, or multi-threading, approach.

    I will just say that I'm thinking about the hardware here and the worst-case scenario, and I have tested these two paradigms myself. The async paradigm is a winner, to the point that I don't get why people talk about concurrency 90% of the time when they want to speed things up or make good use of their resources.

    I have tested multi-threaded and async programs on an old machine with an Intel quad-core that doesn't offer a memory controller inside the CPU; the memory is managed entirely by the motherboard. In this case performance is horrible with a multi-threaded application: even a relatively low number of threads, like 3-5, can be a problem. The application is unresponsive, slow and unpleasant. A good async approach is, on the other hand, probably not faster, but it's not worse either: my application just waits for the result and doesn't hang, it stays responsive, and it scales much better. I have also discovered that a context switch in the threading world is not that cheap in real-world scenarios; it's in fact quite expensive, especially when you have more than 2 threads that need to cycle and swap among each other to be computed.

    On modern CPUs the situation is not really that different: the memory controller is integrated, but my point is that an x86 CPU is basically a serial machine, and the memory controller works the same way as on the old machine with an external memory controller on the motherboard. The context switch is still a relevant cost in my application, and the fact that the memory controller is integrated, or that newer CPUs have more than 2 cores, is no bargain for me.

    From what I have experienced, the concurrent approach is good in theory but not that good in practice. With the memory model imposed by the hardware it's hard to make good use of this paradigm, and it also introduces a lot of issues, ranging from the use of my data structures to joining multiple threads. Also, neither paradigm offers any guarantee about when the task or job will be done at a certain point in time, which makes them really similar from a functional point of view.

    Given the x86 memory model, why do the majority of people suggest using concurrency with C++ rather than just an async approach? And why not consider the worst-case scenario of a computer where the context switch is probably more expensive than the computation itself?
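    The question is about C++ and C#, but the two models being compared - handing blocking work to a pool of threads versus issuing it asynchronously and letting one thread wait on everything - can be sketched in a few lines. This is purely an illustration in Python (chosen for brevity, not something from the post); both versions simulate the same I/O-bound waits.

        import asyncio
        import time
        from concurrent.futures import ThreadPoolExecutor

        def blocking_io(i):
            time.sleep(0.1)            # stand-in for a blocking request
            return i

        async def non_blocking_io(i):
            await asyncio.sleep(0.1)   # the single event-loop thread services other tasks meanwhile
            return i

        # Multi-threaded: one OS thread per in-flight task, OS context switches included.
        with ThreadPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(blocking_io, range(8))))

        # Async: one thread interleaves all the waits; no thread context switches while waiting.
        async def main():
            return await asyncio.gather(*(non_blocking_io(i) for i in range(8)))

        print(asyncio.run(main()))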


  • Normal Redundancy (Double Mirroring) Option Available

    - by TammyBednar
    The Oracle Database Appliance 2.4 patch was released last week and gives you the option of ASM normal redundancy (double mirroring) during the initial deployment of the Database Appliance. The default deployment of the Oracle Database Appliance uses high redundancy for the +DATA and +RECO disk groups.

    While there is 12TB of raw shared storage available, the Database Backup Location and the Disk Group Redundancy govern how much usable storage is presented after the initial deployment is completed. The Database Backup Location options are Local or External. When the Local backup option is selected, 60% of the available shared storage is allocated to the Fast Recovery Area, which contains database backups and archive logs. The External backup option allocates 20% of the available shared storage to the Fast Recovery Area.

    So, let's look at an example of High Redundancy with External backups:

    - Disk Group Redundancy – High: triple mirroring provides ~4TB of available storage.
    - Database Backup Location – External: 20% of available shared storage is allocated to +RECO.
    - +DATA = 3.2TB of usable storage, +RECO = 0.8TB of usable storage.

    What about Normal Redundancy with External backups?

    - Disk Group Redundancy – Normal: double mirroring provides ~6TB of available storage.
    - Database Backup Location – External: 20% of available shared storage is allocated to +RECO.
    - +DATA = 4.8TB of usable storage, +RECO = 1.2TB of usable storage.

    As a best practice, we would recommend using Normal Redundancy for your test and/or development Oracle Database Appliances and High Redundancy for production.
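    A minimal sketch of the arithmetic above, assuming usable capacity is simply raw capacity divided by the number of mirror copies and then split between +DATA and +RECO according to the backup option; the function name and the simplification are mine, the appliance's deployment tool does the real allocation.

        def oda_usable_storage(raw_tb=12, mirror_copies=3, reco_fraction=0.20):
            """Rough usable-storage estimate for the examples above.

            mirror_copies: 3 = high redundancy (triple mirror), 2 = normal (double mirror).
            reco_fraction: 0.20 = external backups, 0.60 = local backups.
            """
            available = raw_tb / mirror_copies        # ~4TB high, ~6TB normal
            reco = available * reco_fraction          # Fast Recovery Area (+RECO)
            data = available - reco                   # +DATA
            return round(data, 2), round(reco, 2)

        print(oda_usable_storage(mirror_copies=3))    # high redundancy, external  -> (3.2, 0.8)
        print(oda_usable_storage(mirror_copies=2))    # normal redundancy, external -> (4.8, 1.2)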


  • Is it unusual for a small company (15 developers) not to use managed source/version control?

    - by LordScree
    It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released, what version it is, and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered.

    I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are:

    - We're currently vulnerable; at any point someone could forget to do one of the many release actions we have to do, which could mean whole versions are not stored correctly. It could take hours or even days to piece a version back together if necessary.
    - We're developing new features along with bug fixes, and often have to delay the release of one or the other because some work has not been completed yet. We also have to force customers to take versions that include new features even if they just want a bug fix, because there's only really one version we're all working on.
    - We're experiencing problems with Visual Studio because multiple developers are using the same projects at the same time (not the same files, but it's still causing problems).
    - There are only 15 developers, but we all do stuff differently; wouldn't it be better to have a standard company-wide approach we all have to follow?

    My questions are:

    1. Is it normal for a group of this size not to have source control?
    2. I have so far been given only vague reasons for not having source control - what reasons would you suggest could be valid for not implementing source control, given the information above?
    3. Are there any more reasons for source control that I could add to my arsenal?

    I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance


  • "Collection Wrapper" pattern - is this common?

    - by Prog
    A different question of mine had to do with encapsulating member data structures inside classes. In order to understand this question better, please read that question and look at the approach discussed. One of the guys who answered that question said that the approach is good, but - if I understood him correctly - he said that there should be a class existing just for the purpose of wrapping the collection, instead of an ordinary class offering a number of public methods just to access the member collection. For example, instead of this:

        class SomeClass {
            // downright exposing the concrete collection.
            Thing[] someCollection;
            // other stuff omitted
            Thing[] getCollection() { return someCollection; }
        }

    Or this:

        class SomeClass {
            // encapsulating the collection, but inflating the class' public interface.
            Thing[] someCollection;
            // class functionality omitted.
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    We'll have this:

        // encapsulating the collection - in a different class, dedicated to this.
        class SomeClass {
            CollectionWrapper someCollection;
            CollectionWrapper getCollection() { return someCollection; }
        }

        class CollectionWrapper {
            Thing[] someCollection;
            public Thing getThing(int index) { return someCollection[index]; }
            public int getSize() { return someCollection.length; }
            public void setThing(int index, Thing thing) { someCollection[index] = thing; }
            public void removeThing(int index) { someCollection[index] = null; }
        }

    This way, the inner data structure in SomeClass can change without affecting client code, and without forcing SomeClass to offer a lot of public methods just to access the inner collection. CollectionWrapper does this instead. E.g. if the collection changes from an array to a List, the internal implementation of CollectionWrapper changes, but client code stays the same. Also, CollectionWrapper can hide certain things from the client code - for example, it can disallow mutation of the collection by not having the methods setThing and removeThing. This approach to decoupling client code from the concrete data structure seems IMHO pretty good. Is this approach common? What are its downfalls? Is this used in practice?


  • Windows web server and SQL Server on same dedicated server

    - by asinc
    I'm currently trying to decide on the best approach to hosting a few moderate-traffic websites for production e-commerce and online applications. We'd like to move to a dedicated server and are looking at this as the most likely machine: Quad-Core Intel Core2Quad Q9550 processor, 2.83 GHz x 4, 4 GB Kingston RAM. This would run Windows Web Server 2008 R2 x64 and potentially also SQL Server Web 2008 and SmarterMail server.

    Given that we already pay for a high-end VPS for development, testing and shared version control, we'd like to avoid going with two servers for production. We'd also like to avoid shared SQL Server hosting, and have thought of using the development server as the database server as an option too - but that is potentially a security risk due to its use for development by internal and contract users.

    The questions are:

    - Do you feel there would be performance degradation by running this on the same machine?
    - Are there significant issues to be concerned about if we do this?

    We understand that best practice would be to run separate db and app servers, but the volume of traffic is currently not that high and adding another server just for the database is currently too costly. What are others doing out there? Alternatively, would you recommend instead going with two separate VPS servers with 2GB RAM each on Hyper-V, which would be about the same cost as the single dedicated server above? Thanks!


  • Cisco AnyConnect on IOS 12.4(20)T

    - by natacado
    There are plenty of tutorials on setting up AnyConnect on an ASA unit, and a handful of links noting that IOS 12.4(15) and later support AnyConnect, but I can't seem to find any good documentation on how to set up AnyConnect on IOS; most tutorials assume you only want a clientless VPN on IOS. The best I've found is this document on Cisco's site, but it's not working for me in practice - see below. This is all on a Cisco 881W:

        router#show version | include Version
        Cisco IOS Software, C880 Software (C880DATA-UNIVERSALK9-M), Version 12.4(20)T1, RELEASE SOFTWARE (fc3)
        ROM: System Bootstrap, Version 12.4(15r)XZ2, RELEASE SOFTWARE (fc1)

    The old SSL VPN Client seems to install just fine:

        router#show webvpn install status svc
        SSLVPN Package SSL-VPN-Client version installed:
        CISCO STC win2k+ 1.0.0 1,1,4,176 Thu 08/16/2007 12:37:00.43

    However, when I install the AnyConnect client, after authenticating it hangs for a while during the self-update process and stops with the error that the "AnyConnect package unavailable or corrupted." When I try to install the AnyConnect package on the router, I'm told that it's an invalid archive:

        router(config)#webvpn install svc flash:/webvpn/anyconnect-win-2.3.2016-k9.pkg
        SSLVPN Package SSL-VPN-Client (seq:2): installed
        Error: Invalid Archive

    Does anyone have a good sample of how to get the 2.x AnyConnect clients working with a Cisco device running IOS?


  • Apache2 with SSL and mod_jk on SUSE Linux Enterprise | Apache always starts SSL disabled

    - by Shaakunthala
    I have installed Apache2 (with mod_ssl enabled) on SUSE Linux Enterprise Server 11 (x86_64), patch level 1, using YaST. Once installed, I tested whether everything works fine so far. SSL also worked fine; just 'apache2ctl start' was enough to get everything working. Then I installed mod_jk and applied the following configuration changes to make it work.

    /etc/sysconfig/apache2 (added the JK module):

        APACHE_MODULES="... ... ... ... ...jk"

    /etc/apache2/httpd.conf (included mod_jk.conf):

        Include /etc/apache2/mod_jk.conf

    /etc/apache2/mod_jk.conf (new file):

        JkLogFile /var/log/apache2/mod_jk.log
        JkWorkersFile /etc/apache2/mod_jk/workers.properties
        JkShmFile /etc/apache2/mod_jk/mod_jk.shm
        # Set the jk log level [debug/error/info]
        JkLogLevel info
        # Select the timestamp log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

    The mod_jk.log and mod_jk.shm files were also created.

    /etc/apache2/mod_jk/workers.properties (new file):

        worker.list=jira
        worker.jira.type=ajp13
        worker.jira.host=127.0.0.1
        worker.jira.port=8009

    Once everything was done, I restarted Apache using the following command:

        apache2ctl restart

    Then I observed that SSL is not working. When checked with telnet, port 443 is not open. In listen.conf, if I specify port 443, bypassing the 'IfDefine' and 'IfModule' conditions, then SSL works properly. It looks like the 'SSL' flag is not being passed to Apache. I did not make this a persistent change, as I thought it might not be the correct practice. I checked /etc/sysconfig/apache2 to see if this had been altered, but the flag is there. Although this flag is enabled, Apache won't start with SSL support:

        APACHE_SERVER_FLAGS="SSL"

    Finally, I had to start Apache using the following command:

        apache2ctl -D SSL -k start

    And my question is: why did Apache (or apache2ctl) fail to start with SSL when I have installed and correctly configured mod_jk, and no other configuration changes were applied? Have I missed anything? Thanks in advance. -- Shaakunthala


  • nginx subdomains improperly act like wildcard?

    - by binjured
    I have an odd problem with nginx subdomains. First, my configuration:

        server {
            listen 443 ssl;
            server_name secure.example.com;
            ssl_certificate example.crt;
            ssl_certificate_key example.key;
            keepalive_timeout 70;
            location / {
                fastcgi_pass 127.0.0.1:8000;
                ...
            }
        }

        server {
            listen 80;
            server_name example.com www.example.com;
            location / {
                fastcgi_pass 127.0.0.1:8000;
                ...
            }
        }

    The idea being that I have a secure domain, secure.example.com, and a normal domain, example.com. In practice, I can go to https://example.com and http://secure.example.com. I worked around the second issue with an intermediary server:

        server {
            listen 80;
            server_name secure.example.com;
            rewrite ^(.*) https://secure.example.com$1 permanent;
        }

    But this is not an optimal solution, and I'd have to create another one to redirect https on the tld to the subdomain. I feel like I must be doing something wrong if I need multiple servers like that. Why does https://example.com work when there is no server listening on 443 there? Shouldn't it just fail to connect? I'm rather confused.


  • IP address spoofing using Source Routing

    - by iamrohitbanga
    With IP options we can specify the route we want an IP packet to take while connecting to a server. If we know that a particular server provides some extra functionality based on the source IP address, can we not exploit this by spoofing an IP packet so that the source address is the privileged IP address, and one of the hosts on the source route is our own?

    So if the privileged IP address is x1, the server IP address is x2, and my own IP address is x3: I send a packet from x1 to x2 which is supposed to pass through x3. x1 does not actually send the packet; it is just that x2 thinks the packet came from x1 via x3. Now, if in its response x2 uses the same source route (as a matter of courtesy to x1), then all the reply packets would be received by x3.

    Will the destination typically use the same IP address sequence as specified in the routing header, so that packets coming from the server pass through my IP, where I can get the required information? Could we not spoof a TCP connection in the above case? Is this attack used in practice?


  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of:

    - Blazing insert performance
    - Its document-oriented model
    - The ability to let the engine drop inserts when needed for performance

    But there is one big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150GB of raw data (we currently run on a system with 16GB of RAM). So, for us to have 500GB or a TB of data, we would need 50GB or 80GB of RAM.

    Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge money on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?)

    Bottom line: MongoDB is FOSS, but we have to spend $$$$$$$ on hardware to run it? We would rather buy commercial software! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!


  • run script as another user from a root script with no tty stdin

    - by viktor tron
    Using CentOS, I want to run a script as user 'training' as a system service. I use daemontools to monitor the process, which needs a launcher script that is run as root and has no tty standard input. Below I give my four different attempts, which all fail.

    Attempt 1:

        #!/bin/bash
        exec >> /var/log/training_service.log 2>&1
        setuidgid training training_command

    The last line is not good enough, since training_command needs the environment of the training user to be set.

    Attempt 2:

        su - training -c 'training_command'

    This looks like the right thing (http://serverfault.com/questions/44400/run-a-shell-script-as-a-different-user), but gives 'standard in must be a tty', as su makes sure a tty is present to potentially accept a password. I know I could make this disappear by modifying /etc/sudoers (a la http://superuser.com/questions/119376/bash-su-script-giving-an-error-standard-in-must-be-a-tty) but I am reluctant and unsure of the consequences.

    Attempt 3:

        runuser - training -c 'training_command'

    This one gives "runuser: cannot set groups: Connection refused". I found no explanation or resolution for this error.

    Attempt 4:

        ssh -p100 training@localhost 'source $HOME/.bashrc; training_command'

    This one is more of a joke, to show desperation. Even this one fails, with "Host key verification failed" (the host key IS in known_hosts, etc).

    Note: all of attempts 2, 3 and 4 work as they should if I run the wrapper script from a root shell. Problems only occur if the system service monitor (daemontools) launches it (no tty terminal, I guess). I am stuck. Is this something so hard to achieve? I appreciate all insight and guidance to best practice. (This has also been posted on superuser: http://superuser.com/questions/434235/script-calling-script-as-other-user)
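    Not an answer, just an illustration of the mechanism those wrappers (setuidgid/su/runuser) provide: a hypothetical Python sketch that, when run as root, drops to the target user and execs a command with a minimal environment for that user, with no tty involved. The environment variables and PATH chosen here are assumptions, not taken from the post.

        import os
        import pwd

        def run_as(username, argv):
            """Drop from root to `username` and exec argv with a minimal environment
            for that user. A sketch of what setuidgid/su/runuser arrange for you."""
            pw = pwd.getpwnam(username)
            os.initgroups(username, pw.pw_gid)   # supplementary groups (must still be root here)
            os.setgid(pw.pw_gid)
            os.setuid(pw.pw_uid)                 # from this point on we are `username`
            env = {
                "HOME": pw.pw_dir,
                "USER": username,
                "LOGNAME": username,
                "SHELL": pw.pw_shell,
                "PATH": "/usr/local/bin:/usr/bin:/bin",   # assumed default, adjust as needed
            }
            os.chdir(pw.pw_dir)
            os.execvpe(argv[0], argv, env)       # replaces the current process

        # e.g. run_as("training", ["training_command"])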


  • Changing subnet-mask of class-c network host to 255.255.0.0

    - by Prashant Mandhare
    We have an existing class C network with the IP address range 11.22.33.44/24 (just for example). My domain controller has been configured within this subnet, so all servers within it have the subnet mask set to 255.255.255.0. Now we have got a new subnet with the IP address range 11.22.88.99/24 (note that only the last two octets have changed). I want all new hosts in this new subnet to join my existing DC. For this we have configured the firewall properly to allow it (so there is no issue with the firewall).

    But initially I was not able to join hosts in the new subnet to the existing domain. Later I suspected the subnet mask used on the domain controller (255.255.255.0), and for testing purposes I changed it to 255.255.0.0. It worked like a charm: I was able to join subnet-2 hosts to the subnet-1 domain.

    Now I am wondering whether it is good practice to change the subnet mask of a class C network to 255.255.0.0. Can any issues arise from this? Experts, please provide your opinion.
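    A quick way to see what the mask change does, using Python's ipaddress module and the example addresses from the post; nothing here is specific to Windows or the domain controller itself.

        import ipaddress

        dc_host = ipaddress.ip_address("11.22.33.44")    # host in the existing subnet
        new_host = ipaddress.ip_address("11.22.88.99")   # host in the new subnet

        # With the original /24 (255.255.255.0) mask, the new host is outside the DC's network:
        print(new_host in ipaddress.ip_network("11.22.33.0/24"))   # False

        # With a /16 (255.255.0.0) mask, both addresses fall inside one network,
        # so hosts treat each other as on-link instead of relying on a router between subnets.
        print(dc_host in ipaddress.ip_network("11.22.0.0/16"))     # True
        print(new_host in ipaddress.ip_network("11.22.0.0/16"))    # True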


  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and I also have two ESXi hosts that I have connected as shown in the "best practice" map by VMware: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf

    Even though I created the VLANs on the SG200 and set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESX NICs are connected), I cannot ping from host 1 to host 2 when configuring the NICs to use VLAN 608. Am I missing something? My IPs are all in the 192.168 range, and the only reason I need the VLANs is to isolate the VSA back-end traffic internally; only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router, since that's the case - is my understanding correct? I'm also sending my switch configuration screenshot below; all 3 switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition): http://img31.imageshack.us/img31/2503/switch.gif

    Any ideas on what to change on the Cisco SG200 to make this work would be appreciated! The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2. The first VLAN (508) will have about 15 IPs for ESXi management and the VSA cluster service; I could use either 192.168.1.xx or 10.0.1.xx. The rest of my network (about 50 clients) is in the 192.168.1.xx range.

    VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL - does anyone know which of the two my SG200-26 uses? In addition to that, the only requirements from VSA are that my two hosts: are in the same subnet, have static IP addresses set, and have the same default gateway configured. If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP to each VLAN, and then set routes between them? Thank you for your time!


  • To clone or to automate a system installation?

    - by Shtééf
    Let's say you're setting up a cluster of servers performing the same task. Or say you're just setting up a bunch of different servers, but you expect to use a base configuration on all of your servers. Would it be better practice to create a base image and clone it, or to automate the installation and configuration?

    I occasionally end up in this argument with my boss, in situations where we're time-pressed. When he sees me struggle with perfecting the automation, his suggestion is often to clone the entire disk to the other machines. But my instinct has always been to avoid cloning. This is mostly from an Ubuntu perspective, but the question is fairly general.

    My reasons for avoiding cloning are:

    - On a typical install, even if it's fresh, there are already several unique identifiers installed: filesystem UUIDs, SSH host keys, among others. These would have to be regenerated.
    - Network needs to be reconfigured for each clone. This would need to be done off-line, of course, or the settings will conflict with other machines on the network.

    On the other hand, some of the cloning advantages are quite clear as well:

    - (Initially?) less effort required than automating configuration.
    - Tools exist to quickly address (some) of the above disadvantages.

    (I can see right through my own bias there.)


  • Managing Many External Hosts Using EC2 and Route 53

    - by futureal
    Looking for a "best practice" answer to managing externally-addressable hosts using the combination of Amazon EC2 and Amazon Route 53, without using Elastic IPs for each host. In my scenario I will have 30+ hosts that need to be accessible from outside EC2, so directly using internal DNS will not work.

    In the past, I have addressed hosts by assigning an Elastic IP to that host (let's say, 55.55.55.55) and then creating an associated A record. For example, let's say I want to create "ec2-corp01.mydomain.com"; I might do:

        ec2-corp01.mydomain.com. A 55.55.55.55 300

    Then on that EC2 instance I would assign the Elastic IP of 55.55.55.55, and everything works fine. Of course, to make this work, I need to have one Elastic IP per instance, which is something I'd like to avoid if possible; I'd like the infrastructure to be more dynamic. So my thought is to try something like this:

    - Create a script that queries the internal EC2 tools to determine an instance's private hostname.
    - On instance boot, call that script to determine its hostname, and then use the command-line Route 53 interface to find and update that hostname to its current internal hostname.
    - Since the host will have a relatively low TTL (let's say 300 as above, or 5 minutes), it should take effect pretty quickly.

    Is this a good idea? Is there a better or more widely accepted way to handle it? If it IS a good idea, what type of record should I be creating? A CNAME that points to the internal host, like ec2-55-55-55-55.compute-1.amazonaws.com? Is an A record better or worse? Thanks!
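    A rough sketch of the boot-time update described above, using Python and boto3 rather than the command-line interface. The hosted zone ID and record name are placeholders, it assumes the classic IMDSv1 metadata endpoint is reachable and the instance has IAM permission to change the record, and it uses the instance's public hostname, since that is what the CNAME example in the question points at.

        import urllib.request
        import boto3

        METADATA_URL = "http://169.254.169.254/latest/meta-data/public-hostname"
        HOSTED_ZONE_ID = "Z1EXAMPLE"                 # placeholder hosted zone ID
        RECORD_NAME = "ec2-corp01.mydomain.com."     # name from the question

        def update_dns(ttl=300):
            # Ask the instance metadata service for this instance's public DNS name.
            public_hostname = urllib.request.urlopen(METADATA_URL, timeout=2).read().decode()
            # UPSERT a low-TTL CNAME pointing the stable name at the current hostname.
            boto3.client("route53").change_resource_record_sets(
                HostedZoneId=HOSTED_ZONE_ID,
                ChangeBatch={"Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": RECORD_NAME,
                        "Type": "CNAME",
                        "TTL": ttl,
                        "ResourceRecords": [{"Value": public_hostname}],
                    },
                }]},
            )

        if __name__ == "__main__":
            update_dns()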


  • Why is my own e-mail address not listed in the To field?

    - by Sammy
    I have received a suspicious e-mail. I am not affiliated with the company mentioned in the e-mail body, or with the signer. However, I have been using the app they mention in the e-mail. They are inviting me to a beta test, but the e-mail is not from the original author of the app. Then again, I'm thinking they might have hired an external company to do this version of the app. There is a link to a TestFlight page, so I'm not sure what to make of this. Now, this is what mainly caught my attention:

        From: Anders Bergman <[email protected]>
        To: Bon Support
        Cc:
        Subject: Test av nya BBK för Android

    This is how it shows up in Outlook 2010. The "To" field is addressed to "Bon Support", and when I double-click on that I see [email protected]. I can assure you that none of these are my e-mail addresses. So where the heck is my own e-mail address? How could I have received this if it was addressed to someone else? If not spammers, skimmers and other criminals, who else is using this practice, and why? And how can I tell now at which of my e-mail accounts I received this?


  • Network Management Cable Labeling Techniques and their alternatives [closed]

    - by Alex
    Possible Duplicate: What is the most effective solution you used to label cables?

    Yes, I know there are a lot of howtos and already-answered questions about this topic, like this one: How do you organise the cables in your racks? Currently I am searching the web for different techniques (alternatives) for labeling the cables at the server racks and/or data centers. Unfortunately I do not have any experience with labeling/documentation of network cables on a large scale.

    As far as I could find so far, the current labeling techniques are coloring and self-defined printed labels (numbering, text), possibly according to a standard, which are usually used. I want to know whether QR codes, RFID (OK, RFID in a data center would be stupid due to the radio frequency, wouldn't it?), barcodes or something similar have already been used by some administrators, or why they did not consider such techniques at all. Too complicated (with a QR scanner etc.) if you are in front of the cables and want quick feedback on what the cable is? What alternatives are out there? Advantages/disadvantages? Best practice? I would appreciate any help on this topic, thank you!

    Regards, Alex


  • Domain Environment + Certificate Authority + Server 2008 R2

    - by user1110302
    I have recently been delegated the task of setting up a CA in our domain environment, and I have a question about why Microsoft does some things the way they do. I have been trying to read up on the best practices for going about this task, and have decided that in an ideal CA environment you should have one "offline" root CA, and then two subordinate CAs for redundancy and for issuing the certs. That is all good - I understand how this works and why - but in messing with a sandbox I have set up, the way you go about adding certificate authorities to a domain environment seems extremely trivial and against all of their best practices.

    Does anyone know what the purpose is of an Enterprise Root CA that is integrated into Active Directory? From what I have read, once you set up an Enterprise Root CA that is integrated into Active Directory, it stays with Active Directory for the long haul and must not be turned off/renamed/touched under any circumstances. If this is true, that seems to go against the practice of setting up a standalone root CA, adding the subordinates, and then taking the root offline. Thanks for any feedback you may have to offer!


  • Windows roaming profile when creating a new user profile

    - by molecule
    When a particular user is having a lot of problems with Windows XP - e.g. applications crashing, applications that used to work becoming unresponsive - and as a general troubleshooting practice for a domain user, I normally rename that user's old profile and get him/her to log on to create a "fresh" profile (on the same PC). More often than not, this solves the problem, albeit with some reconfiguration (Outlook, Excel add-ins etc.).

    As I took over the systems admin role from another administrator, I would like to know the easiest way to find out (either through a third party or some Windows administrative tool) what settings are carried over if the profile is a roaming profile. I tested creating a new user profile for one of my users, and it seems basic Outlook settings such as the user's mailbox and PSTs are carried over automatically when I create a new user profile. I suspect this is done through a batch file loaded as part of the login script. However, my knowledge of scripting is limited, and I don't want any corruption to be carried over to the new profile. Can someone share their experiences on this? Thanks in advance.


  • If spaces in filenames are possible, why do some of us still avoid using them?

    - by Chris W. Rea
    Somebody I know expressed irritation today regarding those of us who tend not to use spaces in our filenames, e.g. NamingThingsLikeThis.txt - despite most modern operating systems supporting spaces in filenames. Non-technical people must look at filenames created by geeks and wonder where we learned English. So, what are the reasons that spaces in filenames are avoided or discouraged? The most obvious reason I could think of, and why I typically avoid them, is the extra quotes required on the command line when dealing with such files. Are there any other significant reasons, other than the practice being a vestigial preference?

    UPDATE: Thanks for all your answers! I'm surprised how popular this was. So, here's a summary - six reasons why geeks prefer filenames without spaces in them:

    1. It's irritating to put quotes around them when referenced on the command line (or elsewhere).
    2. Some older operating systems didn't support them, and us old dogs are used to that.
    3. Some tools still don't support spaces in filenames at all, or don't support them very well. (But they should.)
    4. It's irritating to escape spaces when used where spaces must be escaped, such as URLs.
    5. Certain unenlightened services (e.g. file hosting, webmail) remove or replace spaces anyway!
    6. Names without spaces can be shorter, which is sometimes desirable as paths are limited.
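    Reason 1 in miniature - not from the post, just a small Python illustration (with hypothetical filenames) of how a space-free name passes through naive command-string handling intact while a spaced name needs quoting.

        import shlex

        spaced = "Quarterly Report 2012.txt"      # hypothetical filenames
        unspaced = "QuarterlyReport2012.txt"

        # Naively gluing a command string together splits the spaced name apart:
        print(shlex.split("rm " + spaced))        # ['rm', 'Quarterly', 'Report', '2012.txt']
        print(shlex.split("rm " + unspaced))      # ['rm', 'QuarterlyReport2012.txt']

        # The spaced name survives only if you remember to quote it:
        print("rm " + shlex.quote(spaced))        # rm 'Quarterly Report 2012.txt'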


  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting, and setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to establish a single RSA key under my home/user/.ssh/authorized_keys location. Fine. PuTTY works as expected, and FileZilla (SFTP) links up as required. I've been working on a single site that this user owns, and that's not been a problem.

    Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or rather, Virtualmin did), but I've been unable to get FileZilla to link up to this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of the files to the original user)? Or is it standard practice to create a separate key pair for each user, and establish a separate PuTTY/FileZilla login for each?

    I've spent enough time dinking around with this to be frustrated. The "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome.


  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04): I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use, so I would like to know the best practice when it comes to websites and tasks that I have the server performing.

    I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill? I do have access to root on the server, and my current plan is to run about 4 websites and a TeamSpeak server.

    My stack is Linux (Ubuntu 11.04 LTS), nginx, and PHP 5.4.3 (using the built-in PDO SQLite 3 driver for the database). Should PHP have its own user group, or is it OK to place it in with nginx?


  • how does svn work with apache?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I have edited the PHP source files in NetBeans. However, I wonder what the best practice is for editing source files and committing these changes to the new website. Since we are 2-3 developers, I guess we have to use SVN, but I have never used it before, so I wonder how it works.

    Should I install it and then select /var/www (Apache's web root) as the repository folder? Then, when I check in, will all the changes apply immediately? Could someone please explain the following steps: how to download the source files, edit them, upload the files and see the new changes on the website. So far I have only worked with a local Apache, and it was only me. Now there will be some more programmers, so I have to set up a decent, central environment for this, and I need to know how NetBeans, SVN, WebDAV and Apache all work together. Thanks!


  • What can lead to a zone memory exhaustion and how Nginx reacts to it?

    - by Miles Hughes
    What is a possible scenario for exhausting the memory designated to a connection zone with the limit_conn_zone directive, and what are the implications in that case? Suppose I have this in my configuration:

        http {
            limit_conn_zone $binary_remote_addr zone=connzone:1m;
            ...
            server {
                limit_conn connzone 5;

    which, according to the documentation, allocates 16000 states for connzone on a 64-bit server. It also says that "If the storage for a zone is exhausted, the server will return error 503 (Service Temporarily Unavailable) to all further requests."

    Well, OK. But what does that mean in practice? When does this happen? Who receives those 503s? Does it mean that if the number of IPs somehow associated with connzone hits 16000, everyone gets a 503 and it's all over? How does nginx decide? The documentation is weirdly vague on this. So, considering the example config, who would actually get a 503, under which circumstances, and how would things go from there? Same question for request zones.
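    The 16000 figure is just the zone size divided by the per-state size; a one-line sanity check, using the 64-bytes-per-state value the post quotes for 64-bit servers (not independently verified here).

        ZONE_SIZE_BYTES = 1 * 1024 * 1024    # zone=connzone:1m
        STATE_SIZE_BYTES = 64                # per-key state size on a 64-bit server, per the post
        print(ZONE_SIZE_BYTES // STATE_SIZE_BYTES)   # 16384 distinct $binary_remote_addr keys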

