Search Results

Search found 31207 results on 1249 pages for 'atg best practice in industries'.


  • Best way to provide redundant switching/links to server

    - by Myles Gray
    We have 3x ESX hosts and 2x SANs that we wish to move to a redundant 10G networking infrastructure. We have 4x Dell PowerConnect 8024Fs to provide our backbone, configured as follows (only the core switches are relevant to this question). So the questions are:

    1) Do the interconnects between the 4x 8024Fs need to be LAG'd, or just handled by STP?
    2) As the NICs on the servers are split across 2 switches, does any special configuration need to be done here or on the switches?
    3) If a link or switch fails, will the switches automatically find a new path to the server/SAN?

    Read the article

  • Best video codec to store my own collection

    - by Jack
    Hello! I think this question has already been asked, but with different flavours. My problem resides in the fact that my camera (Canon G9) records video with an almost raw codec (I think it's plain old MPEG), so a 10-minute video is almost 900 MB. I would like to convert them to a format with a good trade-off between space and quality. I would prefer the quality to stay as good as the original (of course this is not possible because of lossy compression), so really I want to save as much space as possible with a minimal loss of quality. Which codec should I look at? H.264? It seems to be the champion of the moment. Otherwise, which other ones could I try? XviD? And which parameters should I use? I mean, how many kbit/s is a fairly good bitrate to keep high quality? And what about the audio codec? The video specs are 640x480 at 30 fps or 1024x768 at 15 fps. Thanks in advance!
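
    For a rough sense of the trade-off: file size is just bitrate times duration, so choosing a target bitrate fixes the output size in advance. A back-of-envelope sketch in Python (the bitrates here are hypothetical starting points, not recommendations):

        # Size of a 10-minute clip at a chosen bitrate.
        video_kbps = 5000          # hypothetical H.264 target for 1024x768 @ 15 fps
        audio_kbps = 128           # e.g. stereo AAC
        seconds = 10 * 60

        size_mb = (video_kbps + audio_kbps) * seconds / 8 / 1024
        print(f"{size_mb:.0f} MB")  # ~376 MB, vs ~900 MB straight off the camera

    Working the same formula backwards, the camera's ~900 MB for 10 minutes comes out to roughly 12 Mbit/s, so even a generous H.264 target cuts the size by more than half.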

    Read the article

  • Best way to script checking whether a machine is on the corporate network

    - by Ben
    I am writing a PowerShell script to determine if a machine is on the corporate network. The machine may or may not be on the domain, so I want to check at the "IP" level. I have written something that checks by pinging a couple of servers on a couple of different subnets (to get around the risk of someone being on another (external) subnet with a host on the same IP). It works, but it's a bit slow, and not especially "future-proof" - e.g. in 2 years' time when I decommission the server, it'll break. Is there a way I can use the DNS suffix handed out by the local DHCP server? Just point me at what I need to check - I can figure out the script. Ta, Ben
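
    The DHCP-assigned DNS suffix is indeed a more durable check than pinging named hosts. A sketch of the logic, shown in Python for brevity (the same idea ports to PowerShell via WMI's Win32_NetworkAdapterConfiguration or [System.Net.Dns]; the suffix and probe name below are hypothetical):

        import socket

        CORP_SUFFIX = "corp.example.com"  # hypothetical corporate DNS suffix

        def on_corporate_network():
            # The DHCP-supplied suffix normally ends up in the machine's FQDN.
            if socket.getfqdn().lower().endswith(CORP_SUFFIX):
                return True
            # Fallback: try resolving a record that only the internal DNS serves.
            try:
                socket.gethostbyname("intranet." + CORP_SUFFIX)
                return True
            except socket.gaierror:
                return False

        print(on_corporate_network())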

    Read the article

  • best way to add and delete text lines with jquery product configurator

    - by Daniel White
    I am creating a product configurator with jQuery. My users can add custom text lines to their product, so you could create, say, 4 text lines with custom text. I need to know the best way to add and delete these lines. Currently I have the following code for adding lines:

        //Add Text Button
        $('a#addText').live('click', function(event) {
            event.preventDefault();
            //Scroll up the text editor
            $('.textOptions').slideUp();
            $('#customText').val('');
            //count how many items are in the ul textList
            var textItems = $('ul#textList li').size();
            var nextNumber = textItems + 1;
            if (textItems <= 5) {
                //Change input to reflect current text being changed
                $('input#currentTextNumber').val(nextNumber);
                //Append an LI item to the textList
                $('ul#textList').append('<li id="textItem' + nextNumber + '">Text Line. +$5.00 <a class="deleteTextItem" href="' + nextNumber + '">Delete</a></li>');
                //Scroll down the text editor
                $('.textOptions').slideDown();
            } else {
                alert('you can have a maximum of 6 textual inputs!');
            }
        });

    I'm probably not doing this the best way, but basically I have an empty UL to start with. When they click "Add Text Line", the code counts how many list elements are in the unordered list, adds 1 to that, and appends a new list element with the id textItem1, textItem2, or whatever number we're on. The problem I'm running into is that deleting an item screws everything up, because when you add an item again the numbers are no longer correct. I thought about writing logic that subtracts 1 from every number above the deleted one while the numbers below stay the same, but I think I'm just going about this the wrong way. Any suggestions on the easiest way to add and delete these text lines are appreciated.
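
    One pattern that sidesteps the renumbering problem entirely is to never store the number at all: keep the lines in an ordered collection and derive each label from its position at render time. A language-agnostic sketch of the idea in Python (the names are hypothetical, not the poster's code):

        # Labels are recomputed from position, so deletes can't corrupt them.
        lines = []

        def add_line(text, max_lines=6):
            if len(lines) >= max_lines:
                raise ValueError("you can have a maximum of 6 textual inputs")
            lines.append(text)

        def delete_line(index):
            del lines[index]  # items after it shift down automatically

        def render():
            # numbering is derived fresh on every render
            return [f"textItem{i + 1}: {text} +$5.00" for i, text in enumerate(lines)]

        add_line("Hello"); add_line("World")
        delete_line(0)
        print(render())  # ['textItem1: World +$5.00']

    In jQuery terms this means re-rendering the list (or re-labelling the remaining li elements) after each delete, instead of trusting the numbers stored in the markup.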

    Read the article

  • Best Timing for Windows AD Domain Name Change

    - by Cliff Racer
    A while back, when I first started with my company, the domain had already been set up using a "xxx.net" DNS name for the internal AD namespace. The short name is just fine and I feel no need to change it, but I have always hated that we used an internet DNS name for our internal AD. We are planning an AD upgrade from 2003 to 2008 R2, and I would like to work in this DNS name change if possible. I know there are procedures for doing a full domain name change, but my question is: is a FULL domain name change necessary if all I want to change is the internal DNS name of the domain? Would it be better to do this change after the 2008 R2 domain upgrade?

    Read the article

  • Best filesystem choices for NFS storing VMware disk images

    - by mlambie
    Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages. What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload?

    Read the article

  • Best way to handle Many-to-Many relationships in PHP MySQL

    - by Jayrox
    I am looking for the best way to handle a database of many-to-many relationships in PHP and MySQL. Right now I have 2 tables:

    Users (id, user_name, first_name, last_name)
    Connections (id_1, id_2)

    In the Users table, id is auto-incremented on add, and user_name is unique but can be changed. Unfortunately, I don't have control over user_name and its ability to be changed, but I must account for it. The Connections table obviously holds user1's and user2's ids. The Connections table needs to account for these possible relations:

    user1 --> user2 (user1 is friends with user2, but user2 is not friends with user1)
    user2 --> user1 (user2 is friends with user1, but user1 is not friends with user2)
    user1 <--> user2 (user1 and user2 are mutually friends)
    user1 <-!-> user2 (user1 and user2 are not friends)

    That part is not the problem. The problem I am having is keeping these relations unique when and if they change in batches. Possible solution 1: delete all of user1's relations and re-add them from the updated list - I think this might be too slow for my needs. Solution 2? Has anyone else encountered this problem? How should I best handle it?

    Update, on distinguishing relationships - I handle them like this:

    user1, user2
    user1, user3
    user2, user1

    In that example the following is true: user1 follows user2 and user3; user2 only follows user1 and doesn't follow user3; user3 doesn't follow either user1 or user2.
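
    A minimal sketch of one approach, in Python with SQLite so it runs self-contained (MySQL would use the same composite primary key, with INSERT IGNORE in place of INSERT OR IGNORE; the helper name is illustrative). The composite key keeps rows unique, and a batch change is applied as a diff rather than delete-all-and-re-add:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (
                id        INTEGER PRIMARY KEY,   -- stable key; never changes
                user_name TEXT UNIQUE NOT NULL   -- may change, so never key on it
            );
            CREATE TABLE connections (
                id_1 INTEGER NOT NULL REFERENCES users(id),  -- the follower
                id_2 INTEGER NOT NULL REFERENCES users(id),  -- the followed
                PRIMARY KEY (id_1, id_2)  -- uniqueness enforced by the schema
            );
        """)

        def set_relations(conn, user_id, new_ids):
            """Replace user_id's outgoing relations, touching only rows that changed."""
            current = {r[0] for r in conn.execute(
                "SELECT id_2 FROM connections WHERE id_1 = ?", (user_id,))}
            new_ids = set(new_ids)
            conn.executemany(
                "DELETE FROM connections WHERE id_1 = ? AND id_2 = ?",
                [(user_id, uid) for uid in current - new_ids])
            conn.executemany(
                "INSERT OR IGNORE INTO connections (id_1, id_2) VALUES (?, ?)",
                [(user_id, uid) for uid in new_ids - current])
            conn.commit()

        conn.executemany("INSERT INTO users (id, user_name) VALUES (?, ?)",
                         [(1, "jayrox"), (2, "alice"), (3, "bob")])
        set_relations(conn, 1, [2, 3])
        set_relations(conn, 1, [3])  # batch change: only the (1, 2) row is deleted

    Because relations always reference the immutable id rather than user_name, renaming a user costs one UPDATE on the users table and leaves connections untouched.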

    Read the article

  • Best HTPC solution that supports MKV

    - by wag2639
    So it seems like there are many solutions out there for HTPC setups (Boxee, GoogleTV, AppleTV, XBMC, etc.), but I can't seem to find a good one for playing MKV files, and I would say the bulk of my video library is encoded in MKVs. What's a good solution for an HTPC if I want the following?

    - NAS / Samba support, to use files from a server or a Windows share
    - Preferably a Linux-based solution
    - Clean-ish UX

    Nice to have, but not a requirement:

    - An Android remote app, so I can control it with my Android device

    Read the article

  • What's the best simple video editing program? [closed]

    - by Itay
    Possible Duplicate: What is the easiest video editing program to use on Windows

    Hi, I want to edit videos, and I'm looking for a good, Premiere-like piece of software, but it seems Premiere itself is too much for me. Frankly, I can't see how all those settings can be helpful to anyone... but it doesn't matter. I've just imported an HD video from my new camera (into CS4); first, it cut most of the video, and second, it actually made the quality worse. I've tried a few free programs, but as soon as I saw their GUIs, I closed them. I really like Premiere, but I honestly don't understand who needs all this PAL/DV stuff. Is there any similar application intended for more intermediate users? Thanks.

    Read the article

  • Understanding MongoDB (and NoSQL in general) and how to make the best use of it

    - by Earlz
    Hello, I am beginning to think that my next project would work better with a NoSQL solution. The project would involve either a ton of 2-column tables or a ton of dynamic queries with dynamically generated columns in a traditional SQL database, so I feel a NoSQL database would be much cleaner. I'm looking at MongoDB and it looks pretty promising. Anyway, I'm attempting to make sense of it all. Also, I will be using MongoMapper in Ruby. I'm confused as to how to lay things out in such a freeform database. I've read http://stackoverflow.com/questions/2170152/nosql-best-practices and the answer there says that normalization is usually bad in a NoSQL DB. So what would be the best way of laying out, say, a simple blog with users, posts, and comments? My natural thought was to have three collections, one for each, and link them by unique IDs. But this apparently is wrong? So, what are some of the ways to lay out such a thing? My concern with the answer given in the other question is: what if the author's name changed? You'd have to go through updating a ton of posts and comments. But is this an okay thing to do with NoSQL?
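
    To make the denormalized layout concrete, here is a hedged sketch of what the document shapes might look like, written as plain Python dicts standing in for BSON (the field names are hypothetical):

        # A user document, and a post that embeds its comments and carries a
        # denormalized copy of the author's name alongside the stable _id reference.
        user = {"_id": 1, "name": "earlz"}

        post = {
            "_id": 100,
            "title": "My first post",
            "author_id": 1,          # stable reference back to the users collection
            "author_name": "earlz",  # denormalized copy: saves a lookup on every read
            "comments": [            # embedded: fetched together with the post
                {"author_id": 1, "author_name": "earlz", "text": "Nice post!"},
            ],
        }

        # A rename then becomes a couple of bulk updates keyed on author_id rather
        # than a per-document chase, e.g. with pymongo (assumed driver, not part
        # of the original question):
        #   db.posts.update_many({"author_id": 1}, {"$set": {"author_name": "new"}})

    Whether that write cost is "okay" is exactly the NoSQL trade: renames are rare and reads are common, so you pay on the rare path.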

    Read the article

  • Best LXDE based distro/distro that supports LXDE?

    - by Misha Koshelev
    Lubuntu is nice, but it seems its LXDE version is not as up to date as the Fedora LXDE Spin or even Debian squeeze with LXDE installed... I do like Chromium on Lubuntu, though; it's faster and a nice touch. So, any good recommendations? I am fairly used to Ubuntu and the dpkg/apt commands, but am willing to learn. I am looking for a lightweight 64-bit distribution for my main laptop (it is by no means "old" or "low spec", but I like that Lubuntu starts up in like 2 seconds). Anyway, as you can see, I have a strong Lubuntu bias, but there are issues like:

    - The LXDE version seems not to be recent (especially in the 10.04 release, which seems to work more stably for me, e.g. with the Nvidia drivers).
    - The 64-bit install is currently a pain: it requires first installing the minimal CD or alternate CD, both of which require wired Ethernet, and then installing Lubuntu from a PPA. Native 64-bit support would be nice. Linux Mint LXDE, for example, is also only 32-bit.

    Thank you so much

    Read the article

  • What is the best SVN client for Windows?

    - by Nick
    I am familiar with using both Versions and Cornerstone on Mac - they are fantastic and incredibly simple to use... but I don't want to always rely on being able to borrow my girlfriend's MacBook so need to find an alternative for my Windows machine. Can anyone please recommend a good subversion client? I have experimented using Tortoise and RapidSVN and couldn't even get them working :( I'd like something just incredibly simple if possible. Thank you!

    Read the article

  • Best practices for thin-provisioning Linux servers (on VMware)

    - by nbr
    I have a setup of about 20 Linux machines, each with about 30-150 gigabytes of customer data. The size of the data will probably grow significantly faster on some machines than on others. These are virtual machines on a VMware vSphere cluster, and the disk images are stored on a SAN system. I'm trying to find a solution that uses disk space sparingly while still allowing individual machines to grow easily. In theory, I would just create big disks for each machine and use thin provisioning; each disk would grow as needed. However, it seems that a 500 GB ext3 filesystem with only 50 GB of data and quite a low number of writes still easily grows the disk image to e.g. 250 GB over time. Or maybe I'm doing something wrong here? (I was surprised how little I found on the subject with Google. BTW, there's not even a thin-provisioning tag on serverfault.com.) Currently I'm planning to create big, thin-provisioned disks, but with a small LVM volume on them, for example a 100 GB volume on a 500 GB disk. That way I could more easily grow the LVM volume and the filesystem size as needed, even online. Now for the actual question: are there better ways to do this (that is, to grow data size as needed without downtime)? Possible solutions include:

    - Using a thin-provisioning-friendly filesystem that tries to occupy the same spots over and over again, thus not growing the image size.
    - Finding an easy method of reclaiming free space on the partition (re-thinning?).
    - Something else?

    A bonus question: if I go with my current plan, would you recommend creating partitions on the disks (pvcreate /dev/sdX1 vs pvcreate /dev/sdX)? I think it's against convention to use raw disks without partitions, but it would make it a bit easier to grow the disks if that is ever needed. This is all just a matter of taste, right?

    Read the article

  • Best way to convert from IMAP to POP3?

    - by Brad
    At work, I connect to a corporate Exchange server via IMAP and Thunderbird 3. Over the course of a year or so, I've created quite a few folders on the server and have a lot of mail stored there. I'm hitting the storage limit of my mail account and want to convert to pulling mail down to my local box (running Linux) via POP3. I know that polling mail will only get mail in INBOX, but I'm wondering if there are solutions out there that could be used to pull mail from the other folders as well, or am I doomed to moving mail into the inbox manually and polling over and over again?
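
    If a one-off migration is acceptable, the Linux box can drain every IMAP folder locally without converting anything to POP3. A hedged sketch using only Python's standard library (the host, credentials, and the "/" hierarchy delimiter are assumptions; check the actual LIST responses from the Exchange server):

        import imaplib
        import mailbox

        imap = imaplib.IMAP4_SSL("mail.example.com")   # hypothetical server
        imap.login("user", "password")                 # placeholder credentials

        local = mailbox.mbox("exchange-backup.mbox")   # everything lands here
        typ, folders = imap.list()
        for line in folders:
            # LIST responses look like: (\HasNoChildren) "/" "Sent Items"
            name = line.decode().split(' "/" ')[-1].strip('"')
            if imap.select('"%s"' % name, readonly=True)[0] != "OK":
                continue  # skip non-selectable folders
            typ, nums = imap.search(None, "ALL")
            for num in nums[0].split():
                typ, data = imap.fetch(num, "(RFC822)")
                local.add(data[0][1])  # raw RFC822 bytes into the local mbox
        local.flush()
        imap.logout()

    From there the mail can be served by a local IMAP server such as Dovecot, or imported into local folders in Thunderbird, keeping the folder structure while the Exchange quota problem goes away.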

    Read the article

  • Source code versioning with comments (organizational practice) - leave or remove?

    - by ADTC
    Before you start admonishing me with "DON'T DO IT", "BAD PRACTICE!" and "learn to use proper source code control", please hear me out first. I am fully aware that the practice of commenting out old code and leaving it there forever is very bad, and I hate such practice myself. But here's the situation I'm in. A few months ago I joined a company as a software developer. I had worked in the company for a few months as an intern, about a year before joining recently. Our company uses source code version control (CVS), but not properly. Here's what happened both in my internship and in my current permanent position. Each time I was assigned to work on a project (legacy, about 8-10 years old), instead of creating a CVS account and letting me check out code and check in changes, a senior colleague exported the code from CVS, zipped it up and passed it to me. While this colleague checks in all changes in bulk every few weeks, our usual practice is to do fine-grained versioning in the actual source code itself (each file increments in versions independently from the rest). Whenever a change is made to a file, the old code is commented out, the new code is entered below it, and the whole section is marked with a version number. Finally, a note about the changes is placed at the top of the file in a section called Modification History, and the changed files are placed in a shared folder, ready and waiting for the bulk check-in.

        /*
         * Copyright notice blah blah
         * Some details about file (project name, file name etc)
         * Modification History:
         * Date        Version  Modified By  Description
         * 2012-10-15  1.0      Joey         Initial creation
         * 2012-10-22  1.1      Chandler     Replaced old code with new code
         */

        code ....

        //v1.1 start
        //old code
        new code
        //v1.1 end

        code ....

    Now the problem is this. In the project I'm working on, I needed to copy some source code files from another project (new in the sense that they didn't exist in the destination project before). These files have a lot of historical commented-out code and comment-based versioning, including a usually long or very long Modification History section. Since the files are new to this project, I decided to clean them up and remove unnecessary code, including the historical code, and start fresh at version 1.0. (I still have to continue the practice of comment-based versioning despite hating it. And don't ask why I don't start at version 0.1...) I did something similar during my internship and no one said anything. My supervisor has seen the work a few times and didn't say I shouldn't do such clean-up (if it was noticed at all). But a same-level colleague saw this and said it's not recommended, as it may cause downtime in the future and increase maintenance costs. An example is when changes are made to the original files in the other project and need to be propagated to this project: with the code files drastically different, it could confuse the employee doing the propagation. That makes sense to me, and it is a valid point. I couldn't find any reason to do my clean-up other than the inconvenience of ridiculously messy code. So, long story short: given the practice in our company, should I not do such clean-up when copying files from project to project? Is it better to make changes on the (copy of the) original code with its full history in comments? Or what justification can I give for doing the clean-up?

    PS to mods: I hope you allow this question some time even if for any reason you determine it to be unfit for SO. I apologize in advance if anything is inappropriate, including the tags.

    Read the article

  • Best way to review pdf documents

    - by Anders Rasmussen
    I'm looking for an easy way to get my PDF document reviewed. I would prefer an online solution where I just upload my document and then send out a URL to my reviewers. They could then give comments through the website without having any special software installed.

    Read the article

  • PHP: Best solution for links breaking in a mod_rewrite app

    - by psil
    I'm using mod_rewrite to redirect all requests targeting non-existent files/directories to index.php?url=*. This is surely the most common thing you do with mod_rewrite, yet I have a problem: naturally, if the page URL is "mydomain.com/blog/view/1", the browser will look for images, stylesheets and relative links in the "virtual" directory "mydomain.com/blog/view/".

    Problem 1: Is using the base tag the best solution? I see that none of the PHP frameworks out there use the base tag, though. I'm currently having a regex rewrite all the relative links to point to the right path before output. Is that "okay"?

    Problem 2: It is possible that the server doesn't support mod_rewrite. However, all public files like images, stylesheets and the request collector index.php are located in the directory /myapp/public. Normally mod_rewrite points all requests to /public, so to all users it seems as if public were actually the root directory. But if there is no mod_rewrite, I then have to point users to /public from the root directory with a header() call. That means, however, that all links are broken again, because suddenly all images etc. have to be called via /public/myimage.jpg.

    Additional info: when there is no mod_rewrite, the above request would look like this: mydomain.com/public/index.php/blog/view/1

    What would be the best solutions for both problems?

    Read the article

  • What is the best config for nginx worker_rlimit_nofile and worker_connections 28672?

    - by Binh Nguyen
    I have an issue with web-browser response times (especially on IE): very slow, sometimes timing out, and sometimes hanging for up to 20 seconds on a single 301 redirect. Testing with the F12 developer tool of IE, the wait/start time is reported as very long, but once connected, the elements of the page download and render quickly (tested at xaluan.com). It mostly happens when there are more than 2100 active users on the site (per Google's real-time analytics). The server runs CentOS 5 with nginx and apache, on a 32-core CPU with 96 GB of RAM and RAID 10 SAS HDDs.

    == The following is my config ==

        user nobody;
        # no need for more workers in the proxy mode
        worker_processes 28;  #old 32  #good at 24
        error_log /var/log/nginx/error.log;  #old: "info" added at the end
        worker_rlimit_nofile 22528;

        events {
            worker_connections 22528;
            use epoll;  # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks off;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 25;  #old 5
            gzip on;  #old on
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            ignore_invalid_headers on;
            client_header_timeout 1m;  #3m
            client_body_timeout 1m;  #3m
            send_timeout 1m;  #3m
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 100M;
            client_body_buffer_size 256k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
            limit_conn limit_per_ip 20;
            limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
            limit_req zone=allips burst=200 nodelay;
            include "/etc/nginx/vhosts/*";
        }

    I have played around with the worker config:

    1) Tried increasing them, as someone suggested: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
    2) Tried setting them low: worker_processes = 28 with the other worker values at 22528 - and other variations too, but they didn't work, because some of them made the server load climb very quickly.
    3) Tried commenting out worker_rlimit_nofile, so it would be unlimited. That seemed to improve the response-time issue a bit, but it also made the server load climb quickly at peak time.

    Please help. Thanks.

    PS: here is the apache config, in case you want a look to help me out. Thanks.

        Listen 0.0.0.0:8081
        User nobody
        Group nobody
        ExtendedStatus On
        ServerAdmin [email protected]
        ServerName server.xaluan.com
        LogLevel warn
        # These can be set in WHM under 'Apache Global Configuration'
        Timeout 100
        TraceEnable Off
        ServerSignature Off
        ServerTokens ProductOnly
        FileETag None
        StartServers 15
        <IfModule prefork.c>
            MinSpareServers 20
            MaxSpareServers 50
            #MaxSpareServers 40
        </IfModule>
        ServerLimit 1572
        MaxClients 1572
        MaxRequestsPerChild 4000
        # MaxRequestsPerChild 3000
        KeepAlive On
        KeepAliveTimeout 3
        MaxKeepAliveRequests 300
        #MaxKeepAliveRequests 130

    Read the article

  • best way to quickly share multiple photos without permanently hosting them

    - by dsollen
    I find that I'm often asked to share lots of photos with someone - enough that uploading each one individually gets tedious when I would rather drag and drop the whole bunch. I could put them on Photobucket, but some of them are semi-private: private enough that I don't want them to be easily found on image-hosting sites. Are there any convenient ways of sharing these photos quickly while still being able to remove them from the inter-webs afterwards (without too much hassle)? I have found that the full version of Yahoo Messenger has great photo-sharing options, but not everyone has it, and I can't expect people to download it just to see some photos.

    Read the article
