Search Results

Search found 6394 results on 256 pages for 'regular expressions'.


  • Set a global environment variable in Linux that sticks when going root

    - by Scott
    When I SSH into a Linux box, I want the /etc/profile file to save the result of the whoami command to a global environment variable. If I then go root with sudo su -, I do not want that command to run again: it should stick with the result of whoami as my regular username from before I went root, and I need to access that variable as the root user even though /etc/profile runs again when I go root. What can I do to make that command run only once in /etc/profile?
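
    A minimal sketch of the guard idea, assuming the variable (ORIG_LOGIN_USER is a hypothetical name) actually survives into the root environment, e.g. via env_keep in sudoers; without that it will simply be re-set:

      # /etc/profile sketch -- only record the login name if it is not already set
      if [ -z "$ORIG_LOGIN_USER" ]; then
          ORIG_LOGIN_USER="$(whoami)"
          export ORIG_LOGIN_USER
      fi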

    Read the article

  • Triple head DVI KVM

    - by cat pants
    I am trying to run a Linux desktop and a Windows desktop simultaneously with a KVM for maximum productivity, also running three monitors. I need a KVM that can do 3x DVI @ 1920x1080 + 2x USB (mouse and keyboard) + TOSLINK for two machines. What would you suggest? I ask here because I have searched quite a bit and have yet to find a KVM with these requirements. (I would be open to something like modifying 3x regular KVMs to control them with one button or similar.) Thanks! (I tried posting this question earlier, but it was closed as being "not related to computer hardware or software" ... is a KVM not related to computer hardware or software? I'm pretty sure it is. kvm-switch has been tagged 100 times on here, for example.)

    Read the article

  • Controlling access to my API using SSH public key (not SSL)

    - by tharrison
    I have the challenge of implementing an API to be consumed by relatively non-technical clients -- pasting some sample code into their WordPress or homegrown PHP site is probably as much as we can ask. Asking them to install SSL on their servers ain't happening. So I am seeking a simple yet secure way to authenticate API clients. OAuth is the obvious solution, but I don't think it passes the "simple" test. Adding a client id and hashed secret as a parameter to the requests is closer -- it's not hard to do md5($secret . $client_id) or whatever the PHP would be. It seems to me that if client requests could use the same approach as SSH public keys (the client gives us a key from their server(s)), there should be some existing magic to make all of the subsequent transactions work transparently, just like regular HTTP API requests. I am still working this out (obviously :-), so if I am being an idiot, it would be nice to know why. Thanks!
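
    For illustration only, a shared-secret signing sketch in shell (HMAC-style signing rather than true public-key auth); the endpoint, header names and credentials below are assumptions, not part of any real API:

      #!/bin/bash
      # Hypothetical client-side request signing: HMAC the body with a shared
      # secret and send the signature alongside the client id.
      CLIENT_ID="demo-client"          # assumption: issued by the API provider
      SECRET="s3cr3t"                  # assumption: shared out-of-band
      BODY='{"action":"list"}'

      SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')

      curl -s -X POST "https://api.example.com/v1/endpoint" \
           -H "X-Client-Id: $CLIENT_ID" \
           -H "X-Signature: $SIG" \
           -d "$BODY"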

    Read the article

  • How to report spam to blacklists

    - by hayalci
    Is there a central place to report spam to various blacklists? I regularly report to SpamCop, but I do not see the addresses I reported listed. (I guess nobody else bothers with my regular spammer. After being frustrated with Bayes and SpamCop, I blocked its /24 subnet.) SpamCop is only a single service. I want to make the spammer known to a large number of services, and hopefully blocked by many of them. I looked for some other blacklists to report to, but the ones I looked at do not accept user submissions (or they hide it well).

    Read the article

  • SSH logins failing before success

    - by Vincent
    I am running Ubuntu 12.04 Server, updated, to run a webserver on Tomcat 7. I have about 1000 clients that very often use an rsync program to sync some files with this server. Those rsync jobs use SSH with a certain user to open connections on the server. The result is that my server is, as normal, full of connections by the same user: about 5 connections per second, around the clock. When I try to open a regular SSH connection with my PuTTY client, the connection fails before login with "Server unexpectedly closed network connection" for about 6 out of 10 attempts; for the other 4 out of 10, it works normally and I am able to log in as any user. Is there an overload of connections here? The server statistics look very calm, showing less than 40% network usage and less than 2% CPU. How can I improve this? Thank you for any help. V.
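
    One common cause of intermittent pre-login drops under a constant stream of unauthenticated connections is sshd's MaxStartups throttle; a sketch of raising it, assuming that is in fact the bottleneck here:

      # /etc/ssh/sshd_config (sketch) -- start:rate:full means: begin randomly
      # dropping 30% of new unauthenticated connections at 50 pending,
      # and drop all of them at 200 pending (the default is 10:30:100)
      MaxStartups 50:30:200

      # then reload sshd on Ubuntu 12.04:
      #   sudo service ssh reload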

    Read the article

  • Windows VPS/Cloud Host Recommendations?

    - by user18937
    As Hosting.com is no longer offering Windows VPS accounts, we are looking for a new US-based provider. We're looking for something that offers standard Windows IIS hosting for a DotNetNuke portal site with SQL Server Express or Web. Basic managed services for OS updates as well as regular backups are required. A good level of support and solid uptime are critical. The budget is in the $150-$200/month range but flexible depending on quality and services offered. Does anyone have any suggestions and good feedback they can share? We are currently looking at Jodohost as an option but would like some other possibilities, as we have found their support can be suspect at times. A cloud solution in the same range would also be an option.

    Read the article

  • Can an SSD notify the hosting OS that its wear level is getting high?

    - by Tony_Henrich
    I read a lot about SSDs and I am interested in them for server use. My biggest concern is their reliability: a lot of writes shortens their life span. I could mitigate this problem if I could run some kind of diagnostics on the SSD on a regular basis, or if the SSD could automatically warn the OS that its reliability is reaching a critical level. Think of this as S.M.A.R.T. or software like SpinRite for SSDs. Does anything I mentioned exist now? Which kind/brand of SSD does this? I don't mind swapping out a tired SSD for a newer one once in a while. I am pretty sure an SSD's life is measured in years and not in a few months? For me, the improved performance will pay for the SSD over and over. I am planning to use plenty of RAM as well.
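
    Many SSDs do expose wear figures through S.M.A.R.T. attributes, which smartmontools can read on Linux; a quick sketch (the device name is an example and the attribute names vary by vendor):

      # install smartmontools and query the drive's S.M.A.R.T. data
      sudo apt-get install smartmontools
      sudo smartctl -a /dev/sda

      # wear-related attributes differ by vendor, e.g.
      #   Media_Wearout_Indicator (Intel), Wear_Leveling_Count (Samsung)
      sudo smartctl -A /dev/sda | grep -i -E 'wear|percent'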

    Read the article

  • Touchscreen on KDE and Ubuntu?

    - by The Quantum Physicist
    I just bought a Lenovo Yoga 2 Pro... I liked how the touchscreen behaved on Windows, where it makes as much sense as it does on my smartphone. However, I'm not a regular Windows user, so I installed Kubuntu 14.04, and everything looks fine, except that the touchscreen behaviour is so limited that it's useless. Why? Because all the touchscreen does is act as a single mouse pointer with a left click. For example, if I touch the screen for a relatively long time, I don't get the effect of a right click. How do I configure the touchscreen properly to get the behaviour expected on Ubuntu and KDE? Thanks for any efforts.

    Read the article

  • How many xml http requests is too much for a pc to handle?

    - by Uri
    I'm running MediaWiki on Apache on a regular PC running Vista (I don't know the specific specs, but I'm guessing at least a 2 GHz Core 2 Duo processor and a broadband connection of at least 500 kb/s, probably 1 Mb/s). I want to use the MediaWiki API to send a lot of requests to this server. Most of the time the requests will be sent over the LAN (but sometimes over the internet). I'm talking thousands of requests every few seconds in the worst case. (A lot of these requests may repeat themselves; I guess some sort of cache would help.) Will the server handle this, or do I need a stronger/dedicated computer? (I'm not looking for a specific yes/no, but just want to get an idea as to what configuration of computer will support how many requests per second.) Thanks

    Read the article

  • Can you see something wrong in my working .htaccess?

    - by AlexV
    OK, after much searching and trial and error I've managed to create an .htaccess that does what I wanted (see explanations and questions after the code block):

      <IfModule mod_rewrite.c>
        RewriteEngine On

        #1 If the requested file is not url-mapper.php (to avoid .htaccess loop)
        RewriteCond %{REQUEST_FILENAME} (?<!url-mapper\.php)$

        #2 If the requested URI does not end with an extension OR if the URI ends with .php*
        RewriteCond %{REQUEST_URI} !\.(.*) [OR]
        RewriteCond %{REQUEST_URI} \.php.*$ [NC]

        #3 If the requested URI is not in an excluded location
        RewriteCond %{REQUEST_URI} !^/seo-urls\/(excluded1|excluded2)(/.*)?$

        #Then serve the URI via the mapper
        RewriteRule .* /seo-urls/url-mapper.php?uri=%{REQUEST_URI} [L,QSA]
      </IfModule>

    This is what the .htaccess should do: #1 checks that the requested file is not url-mapper.php (to avoid infinite redirect loops); this file will always be at the root of the domain. #2 The .htaccess must only catch URLs that don't end with an extension (www.foo.com -- catch | www.foo.com/catch-me -- catch | www.foo.com/dont-catch.me -- don't catch) and URLs ending with .php* (.php, .php4, .php5, .php123...). #3 Some directories (and their children) can be excluded from the .htaccess (in this case /seo-urls/excluded1 and /seo-urls/excluded2). Finally, the .htaccess feeds the mapper a hidden GET parameter named uri containing the requested URI. Even though I tested it and everything works, I want to know if what I do is correct (and if it's the "best" way to do it). I've learned a lot with this "project" but I still consider myself a beginner at .htaccess and regular expressions, so I want to triple-check it here before putting it into production...

    Read the article

  • Set up homeserver with single IP to host multiple sites on Ubuntu [closed]

    - by Ortix92
    I am trying to set up my home server so it can function like a regular server one would rent. I am running Ubuntu 12.04 LTS with OpenPanel. I have a single static IP address. I am used to having two addresses, pointing them to NS1.domain.tld and NS2.domain.tld, and setting up the proper DNS records. I would also like to mention that I am somewhat new to DNS zones. Either way, how would I go about setting this up correctly (in OpenPanel) with just a single IP address, if that is possible at all? I have also read about free solutions online, but I would like to keep everything secure and private so other people can't peer into my data somehow. Thanks!
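
    For orientation only, a hypothetical zone sketch where both nameserver names resolve to the same static address (the IP, serial and timers are placeholders, and matching glue records are still needed at the registrar):

      ; zone file sketch for domain.tld -- placeholder values throughout
      $TTL 3600
      @     IN  SOA  ns1.domain.tld. hostmaster.domain.tld. (
                     2024010101 3600 900 1209600 3600 )
            IN  NS   ns1.domain.tld.
            IN  NS   ns2.domain.tld.
      ns1   IN  A    203.0.113.10
      ns2   IN  A    203.0.113.10
      @     IN  A    203.0.113.10
      www   IN  A    203.0.113.10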

    Read the article

  • Sync two external harddrives?

    - by acidzombie24
    A little mishap happened earlier today and I am thinking I should have a copy of my external hard drive, since 10% of it is very valuable. What is the best solution to keep two external hard drives in sync? I'll probably use one as the regular drive and the other only as a copy of the data. The easiest way to keep them in sync is to wipe one drive and copy the other over, but 1 TB of data will take a long time. What's a good existing app that will keep them in sync? Freeware preferred.
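
    A minimal rsync sketch, assuming both drives are mounted at these hypothetical paths; run with --dry-run first to preview what would change, then for real:

      # preview, then mirror the primary drive onto the backup drive
      rsync -avh --delete --dry-run /media/primary/ /media/backup/
      rsync -avh --delete /media/primary/ /media/backup/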

    Read the article

  • RAID 1 in Ubuntu 12.04

    - by Bavly Hanna
    Right now I have a small file server onto which I have loaded Ubuntu 12.04 desktop, on a small 160 GB hard drive. This hard drive is the primary drive from which the OS boots. I want to move all my data to my file server so it can be shared on the network (it is contained on 2x 2 TB hard drives in my desktop). The 2 TB drives are in RAID 1 (hardware). I simply want to move them to the file server and set them up in software RAID 1. If at all possible I'd like to be able to do this without losing any info on the drives. I've searched around and the guides I find describe RAIDing drives for the boot drive, but these wouldn't be boot drives, just regular storage drives. If someone could tell me how to perform this or point me in the right direction it would be much appreciated.
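
    For reference, a rough mdadm sketch of the usual degraded-mirror approach (create the array with one disk missing, copy the data onto it, then add the second disk). The device names are hypothetical, and creating the array destroys whatever is on the disk you give it, so have a verified backup before attempting anything like this:

      sudo apt-get install mdadm

      # create a RAID 1 array with one member missing
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

      # format and mount the array, copy the data onto it, then add the
      # second disk so the mirror rebuilds
      sudo mkfs.ext4 /dev/md0
      sudo mount /dev/md0 /srv/storage
      # ... copy the data from the remaining original disk ...
      sudo mdadm --add /dev/md0 /dev/sdc1

      # persist the array definition so it assembles at boot
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf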

    Read the article

  • DNS lookup fails when forwarding to subdomain

    - by Kitaro
    In order to migrate to a new mail server with few DNS problems and little downtime, I have set up a second Postfix instance that is currently reachable via a subdomain MX record; e.g. the main Postfix accepts mail for [email protected] while the second Postfix also accepts mail for [email protected]. I added a forwarding rule saying that Postfix should forward mail destined for [email protected] to [email protected] (for regular local delivery) and to [email protected]. Local delivery still works as expected, but when trying to forward the mail to the new MX, Postfix appends the domain part at the end of the forwarding address, resulting in [email protected], which of course fails, and the mail bounces. Why does Postfix mess with the alias name in that way and how can I turn that off?
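
    A guess only, since the addresses above are redacted: if the forward's right-hand side is not treated as a fully qualified address, Postfix will qualify it with the local domain. A virtual alias map with fully qualified addresses on both sides (hypothetical names below) is one way to express this kind of forward; the self-mapping on the right is the commonly cited way to keep local delivery as well:

      # /etc/postfix/virtual (sketch, hypothetical addresses)
      user@example.com    user@example.com, user@mx2.example.com

      # /etc/postfix/main.cf
      #   virtual_alias_maps = hash:/etc/postfix/virtual

      # rebuild the map and reload
      postmap /etc/postfix/virtual
      postfix reload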

    Read the article

  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233 MHz Pentium MMX CPU and a graphics card with no 3D acceleration. It was able to run games like Quake II at a decent FPS rate. My current computer has tons more performance and a mid-range GPU, yet struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so theoretically it should be a lot faster than it is. What am I missing here, and is it possible to avoid or minimize whatever is making browsers slow, in order to build more efficient websites?

    Read the article

  • Prevent access to the C drive

    - by Jenko
    Is it possible to prevent regular users from accessing the C drive via Windows Explorer? They should still be allowed to execute certain programs. This is to ensure that employees cannot steal or copy out proprietary software even though they should be able to execute it. One way would be to change the option in Windows Group Policy and set the "shell" to something other than "explorer.exe". I'm looking for a similar Windows setting that just hides the C drive or otherwise prevents trivial access. This is for Windows XP/7.

    Read the article

  • Running a cronjob

    - by Ed01
    I've been puzzling over cron jobs for the last few hours. I've read documentation and examples. I understand the basics and concepts, but haven't gotten anything to work. So I would appreciate some help with this total noob dilemma. The ultimate goal is to schedule the execution of a Django function every day. Before I get that far, I want to know that I can schedule any old script to run, first once, then on a regular basis. So I want to: 1) Write a simple script (perhaps a bash script) that will allow me to determine that yes, it did indeed run successfully, or that it failed. 2) Schedule this script to run at the top of the hour. I tried writing a bash script that simply outputs some text to the terminal:

      #!/bin/bash
      echo "The script ran"

    Then I dropped this into a .txt file:

      MAILTO = *****.******@gmail.com
      05 * * * * /home/vadmin/development/test.sh

    But nothing happened. I'm sure I did many things wrong. Where do I start to fix all of this?
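
    For what it's worth, a sketch of how this is usually wired up, assuming the goal is exactly the hourly run described above (the mail address is a placeholder): the entry has to be installed as the user's crontab rather than left in a loose .txt file, the script needs to be executable, and cron has no terminal, so the output goes to mail or a log file:

      # make the script executable
      chmod +x /home/vadmin/development/test.sh

      # put the schedule in a file and install it as the user's crontab
      # (or edit interactively with: crontab -e)
      #   crontab mycron.txt
      # where mycron.txt contains, for five past every hour:
      MAILTO=you@example.com
      5 * * * * /home/vadmin/development/test.sh >> /tmp/test-cron.log 2>&1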

    Read the article

  • Is there a way to redirect certain URLs to specific web browsers in Linux?

    - by jraxxo
    I'm using Chrome as my default browser in Ubuntu 12.10. I need to use Firefox for business purposes (certain websites pertaining to my work only work with Firefox). Is there a way to force Ubuntu to use Firefox for certain types of URLs (maybe as defined by a regular expression) while maintaining Chrome as my default browser for all my other tasks? Perhaps a shell script running in the background? I'd like this to work system-wide, covering links from Chrome itself as well as from PDFs/ODTs, etc. I have searched for solutions, but I couldn't find anything besides OpenWith, a Firefox extension that adds a button to open certain links in other browsers; that would again require me to open Firefox beforehand, which does not help me at all. Does anyone have any ideas? Something like Choosy for Linux?
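
    One possible approach, sketched below: register a small wrapper script as the system's default URL handler (e.g. via its own .desktop file) and have it dispatch on the URL; the patterns and browser commands here are assumptions to adapt:

      #!/bin/bash
      # browser-dispatch.sh -- hypothetical wrapper that picks a browser per URL
      url="$1"

      case "$url" in
          *intranet.example.com*|*workportal.example.com*)
              exec firefox "$url" ;;         # work sites that need Firefox
          *)
              exec google-chrome "$url" ;;   # everything else stays in Chrome
      esac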

    Read the article

  • Mac OS X default editor for .dotsystemfiles

    - by jasonkuhrt
    This is a seemingly simple question, but I can't find an answer so far despite searching quite a bit. I'd like that when I open a dot system file in Finder (i.e. .htaccess or .vimrc) it opens in a different editor than TextEdit. Doing the regular change-all in the info panel won't do the trick, as it gives the following error: "An error occurred while changing the application that opens “.vimrc” because not enough information is available. Do you want to open “.vimrc” with “MacVim.app”?" This isn't a huge issue, but it is like a small splinter that I'd love removed. Thanks for any helpful information.

    Read the article

  • Can I use applescript to click buttons in background?

    - by Giorgio
    Sorry for this generic and probably badly written question. I've never programmed in AppleScript, but I'm quite familiar with other programming languages. I need to click two sequential buttons inside the lobby of a program (when you click the first, a popup appears and I should click 'OK'). However, things are a little more complicated than this because: 1) The lobby of this program isn't in the foreground: it's covered by other open windows. (I don't have experience, so I don't know if this is a problem.) 2) There should be a timer, and the script should click these buttons at regular intervals. Is this feasible with AppleScript?

    Read the article

  • HP DL380 Losing Drive Array

    - by jidl
    I have an HP ProLiant DL380 G7 dropping one of its arrays every hour, on the hour, for 2-5 minutes. The OS is SBS 2011 Standard, and the server runs Exchange, a DC, file services and Trend WFBS 8. I can watch the D drive disappear for the duration of the problem; then it just comes back up and all is well again. There is no loss of network connectivity, although the mapped drives also disappear. We thought it might be to do with SharePoint / VSS writers failing, but it looks as though this is a symptom rather than the cause. It survives a reboot. Any ideas as to what could be running on a regular schedule like this?

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections.

    Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log File. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension, .LDF. The questions now are, "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and, "Why is the .MDF file called a primary data file?"

    Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with an .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is "Data about Data". An example of Meta Data includes system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it's system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (tables, indexes etc.) in the Primary Data File (this file is present in the Primary File Group), by mentioning that you want to create this object under the Primary File Group.

    Any additional data file that you add to the database will have only transactional data but no Meta Data, so that's why it is called the Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files that are under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid taking a backup of read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-time scenario where we use files could be this one: let's say you have created a database called MYDB on the D drive, which has 50 GB of space. You also have 1 database file (.MDF) and 1 log file on the D drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and set the file group under which you added the new file as the default file group, so that any new object that is created goes into the new files, unless specified otherwise.

    Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature "Shrinking". Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means to remove the empty space from database files and release the empty space either to the Operating System or to SQL Server. Let's examine this through an example. Let's say you have a database "MYDB" with a size of 50 GB that has free space of about 20 GB, which means 30 GB in the database is filled with data and the 20 GB of space is free in the database because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the Operating System, MIND YOU, you can only shrink the database size down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to set the Auto Growth option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: 1. There is no empty space to be released, because the Shrink command does not compress the database; it only removes the empty space from the database files, and there is no empty space. 2. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database.

    Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)? A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important note: before asking any question, make sure you go through Books Online or search on Google once. Doing so has many advantages: 1. If someone else has already had this question before, the chances that it is already answered are more than 50%. 2. This reduces your waiting time for the answer.

    (2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using this command from Query Analyzer is that your console won't freeze, and you can perform your regular activities using Enterprise Manager.

    (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You have never heard of it because, when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store data saved by the users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Parallelism in .NET – Introduction

    - by Reed
    Parallel programming is something that every professional developer should understand, but it is rarely discussed or taught in detail in a formal manner. Software users are no longer content with applications that lock up the user interface regularly, or take large amounts of time to process data unnecessarily. Modern development requires the use of parallelism. There are no longer any excuses for us as developers. Learning to write parallel software is challenging. It requires more than reading that one chapter on parallelism in our programming language book of choice… Today's systems are no longer getting faster with each generation; in many cases, newer computers are actually slower than previous generation systems. Modern hardware is shifting towards conservation of power, with processing scalability coming from having multiple computer cores, not faster and faster CPUs. Our CPU frequencies no longer double on a regular basis, but Moore's Law is still holding strong. Now, however, instead of scaling transistors in order to make processors faster, hardware manufacturers are scaling the transistors in order to add more discrete hardware processing threads to the system. This changes how we should think about software. In order to take advantage of modern systems, we need to redesign and rewrite our algorithms to work in parallel. As with any design domain, it helps tremendously to have a common language, as well as a common set of patterns and tools. For .NET developers, this is an exciting time for parallel programming. Version 4 of the .NET Framework is adding the Task Parallel Library. This has been back-ported to .NET 3.5sp1 as part of the Reactive Extensions for .NET, and is available for use today in both .NET 3.5 and .NET 4.0 beta. In order to fully utilize the Task Parallel Library and parallelism, both in .NET 4 and previous versions, we need to understand the proper terminology. For this series, I will provide an introduction to some of the basic concepts in parallelism, and relate them to the tools available in .NET.

    Read the article

  • Go Directly to Desktop Mode in Windows 8 on Login (Without Installing Extra Software)

    - by Asian Angel
    A lot of people are unhappy with being forced to interact with the new Start Screen in Windows 8 first thing once they have logged into their system. But there is a quick and simple workaround to go directly to Desktop Mode that does not require installing extra software or making changes to your system. The first thing you will need to do is make sure that the Desktop tile is in the uppermost left position on the Start Screen, as seen here. Once the tile has been moved to that position, you will need to restart/reboot your system. Once your system has restarted and you are back at the Login Screen, type in your password but do NOT click on the arrow button or tap the Enter key. Instead of tapping the Enter key, simply press down on it and hold it down until you see the regular desktop. Keep in mind that you may see the Start Screen become visible for just a short moment as it is being bypassed for the desktop.

    Read the article

  • Excel-based Performance Reviews transformed into Web Application for Performance Management

    - by Webgui
    HR TMS provides enterprise talent management solutions for healthcare, retail and corporate customers, focusing on performance management, compensation management and succession planning. As the competency of nurses and other healthcare workers is critical, the government, via the Joint Commission (JCAHO), tightly monitors their performance. On a regular basis, accredited healthcare organizations are required to review employee performance using a complex set of position-dependent job descriptions and competencies. Middlesex Hospital managed its performance reviews for 2500 employees manually with Excel spreadsheets. This was a labor-intensive process that proved to be error-prone and difficult to manage. Reviews were not always where they belonged, and the job descriptions and competencies for healthcare workers were difficult to keep accurate and up to date. As a result, when the Joint Commission visited and requested to see specific review documentation, there was intense stress. Middlesex Hospital needed to automate its review process, pull in the position information from those spreadsheets and be able to deliver reviews online. Users needed to have online access to those reviews from a standard browser. Although the manual system had its issues, it did have the advantage of being very comprehensive and familiar to users. The decision was made to provide a web-based solution that leveraged the look and feel of those spreadsheets in order to ensure user acceptance of the system and minimize the training needed. Read the full article here >

    Read the article
