Search Results

Search found 8190 results on 328 pages for 'separate'.


  • how is the linux console displayed to the user and how does the user go about changing the console settings

    - by Chris
    I've been searching for the last two days, trying to understand how the console displays itself to the user and how to change the console settings. I've had some luck along the way, but nothing I've found has given me a really clear explanation of how the console is displayed or how to change or control its display settings. Some examples of what I'm looking for are as follows:
      - How is the console displayed on the screen? I know that with X11 the graphics card driver is used to display graphics to the screen, but how is the console's text mode handled? Could someone either explain this to me or point me to an in-depth overview of it all?
      - Is it possible to have multi-head support in console mode, with separate ttys on each screen? If so, how would I go about setting this up?
      - How would you go about changing the size of the console display from the default 80x25 to a custom size?
    I'm testing anything I find on a Debian testing build, which is just the minimal base install in a VirtualBox VM. In time I will be using this information to set up my main system, which is multi-head with three monitors. I would like to be able to support all three displays in console mode if possible.
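
    A minimal sketch of one way to change the console size on a Debian system booted with GRUB 2 and a kernel-modesetting framebuffer; the 1024x768 mode is only an example, and GRUB_GFXPAYLOAD_LINUX needs a reasonably recent GRUB 2 (1.99 or later), while older setups used the vga= kernel parameter instead:

        # /etc/default/grub   (then run: update-grub && reboot)
        GRUB_GFXMODE=1024x768          # resolution of the GRUB menu itself
        GRUB_GFXPAYLOAD_LINUX=keep     # keep that framebuffer mode for the text consoles

        # console font and geometry can then be tuned interactively with:
        #   dpkg-reconfigure console-setup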

    Read the article

  • HP/IBM alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen for more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys, HP/IBM? I have searched high and low on the HP site and seen many options, but am struggling to work out whether any one of them is all the hardware I would need. Some options appear to need separate controllers apart from the disk enclosures, etc.

    Read the article

  • nginx: JS file loads inconsistently on every refresh

    - by poymode
    I have an nginx problem wherein a JS file in a Rails app loads inconsistently. Whenever I try to access the JS file in the browser and refresh the page, the scrollbar changes length, meaning sometimes it loads half the JS page, sometimes the whole thing and sometimes just a part of it. The JS file size is 71K. My nginx server is on a different server, separate from my Rails app. When I try to access the JS file directly through the app server, let's say 10.48.30.150:3000/javascripts/file.js, it works fine and doesn't show any half-loaded page. But when I go through the nginx server, which upstreams the Rails app, I get the inconsistent page loads. Here is my nginx http conf:

        error_log /usr/local/nginx/logs/error.log;
        pid /usr/local/nginx/logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            server_names_hash_bucket_size 256;
            access_log /usr/local/nginx/logs/access.log;
            sendfile on;
            #tcp_nopush on;
            keepalive_timeout 0;
            tcp_nodelay on;
            #gzip on;
            #gzip_min_length 4096;
            #gzip_buffers 16 8k;
            #gzip_types application/x-javascript text/css text/plain;
            large_client_header_buffers 4 8k;
            client_max_body_size 2G;
            include /usr/local/nginx/conf.d/*.conf;
        }
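
    Truncated proxied responses like this are commonly caused by nginx buffering the upstream reply to its proxy temp directory without having write permission there, or by undersized proxy buffers. A minimal sketch of the kind of settings to check, assuming the Rails upstream is proxied from a location block inside one of the included conf.d files; the location, path and sizes below are illustrative, not the questioner's actual config:

        location /javascripts/ {
            proxy_pass http://10.48.30.150:3000;

            # either make the whole 71K response fit in memory buffers,
            # or make sure the temp path is writable by the nginx worker user
            proxy_buffer_size       16k;
            proxy_buffers           8 64k;
            proxy_busy_buffers_size 128k;
            proxy_temp_path         /usr/local/nginx/proxy_temp;
        }

    When this is the cause, the error.log configured above usually shows a "Permission denied" or "upstream prematurely closed connection" line for each truncated response.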

    Read the article

  • SSH Keys Authentication keeps asking for password

    - by Rhyuk
    I'm trying to set up access from ServerA (SunOS) to ServerB (some custom Linux with keyboard-interactive login) with SSH keys. As a proof of concept I was able to do it between two virtual machines. Now, in my real-life scenario, it isn't working. I created the keys on ServerA, copied them to ServerB, and chmod'd the .ssh folders to 700 on both ServerA and ServerB. Here is the log of what I get:

        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: Peer sent proposed langtags, ctos:
        debug1: Peer sent proposed langtags, stoc:
        debug1: We proposed langtags, ctos: en-US
        debug1: We proposed langtags, stoc: en-US
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: dh_gen_key: priv key bits set: 125/256
        debug1: bits set: 1039/2048
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'XXX.XXX.XXX.XXX' is known and matches the RSA host key.
        debug1: Found key in /XXX/.ssh/known_hosts:1
        debug1: bits set: 1061/2048
        debug1: ssh_rsa_verify: signature correct
        debug1: newkeys: mode 1
        debug1: set_newkeys: setting new keys for 'out' mode
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: newkeys: mode 0
        debug1: set_newkeys: setting new keys for 'in' mode
        debug1: SSH2_MSG_NEWKEYS received
        debug1: done: ssh_kex2.
        debug1: send SSH2_MSG_SERVICE_REQUEST
        debug1: got SSH2_MSG_SERVICE_ACCEPT
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Trying private key: /XXXX/.ssh/identity
        debug1: Trying public key: /xxx/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /xxx/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:
        Password:

    ServerB has a pretty limited set of actions since it's a custom proprietary Linux. What could be happening?
    EDIT WITH ANSWER: The problem was that I didn't have those settings enabled in sshd_config (refer to the accepted answer), AND that while pasting the key from ServerA to ServerB it would be interpreted as three separate lines. What I did, in case you can't use ssh-copy-id like I couldn't: paste the first line of your key into ServerB's authorized_keys file WITHOUT the last two characters, then type the missing characters from line 1 and the first one from line 2 yourself; this prevents a newline being added between the first and second lines of the key. Repeat with the third line.
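
    For reference, a minimal sketch of the pieces that usually have to line up for publickey auth (the option names are standard OpenSSH; a proprietary build may differ), plus a way to append the key in one piece instead of pasting it by hand:

        # on ServerB, /etc/ssh/sshd_config (then restart/reload sshd):
        #   PubkeyAuthentication yes
        #   AuthorizedKeysFile   .ssh/authorized_keys

        # permissions OpenSSH insists on, on ServerB:
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # append the key without introducing line breaks, from ServerA:
        cat ~/.ssh/id_rsa.pub | ssh user@serverB 'cat >> ~/.ssh/authorized_keys'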

    Read the article

  • Triple-Boot + 4 partition Limit

    - by dsimcha
    I just bought a new hard drive so that I could convert my XP-only machine into an XP/Ubuntu/Windows 7 triple-boot machine. Since the drive is absurdly huge (1 TB) I wouldn't mind throwing ReactOS into the mix, too. I just found out that master boot records are limited to 4 entries, meaning 4 primary partitions. I had Windows XP set up on my old drive as a boot partition, a program files partition and a media partition. Since I really didn't want to install XP from scratch, I cloned this setup on my new drive. This leaves me one MBR partition entry for installing Windows 7, Ubuntu and ReactOS. I'd like to avoid having to install XP from scratch like the plague, partly because it's supposed to be a safety net in case things go wrong with my other OSes and because I've invested a lot of time getting it set up exactly the way I like it. Here are the options I've considered and why I don't like them:
      - Install Windows 7 on my media partition. This would work, but I prefer to keep my media partition completely separate from any OS, so that I can reformat an OS partition without affecting my media partition at all.
      - Use Wubi or something to install Ubuntu in the same partition as something else. Again, this is brittle.
      - Move all my media to a logical drive on an extended partition, and create another logical drive on that extended partition for Ubuntu. The problem here is that extended partitions are rather brittle; if you nuke one, it renders the rest useless.
      - Just put the old drive back in my computer and run XP off it, and use the new one for the other OSes. The problem here is that the old drive is slower and uses extra power, generates extra heat, etc.
    Can anyone suggest any other possibilities that I may have overlooked?

    Read the article

  • What may be the reason for the slowness (see details in message body)?

    - by Ivan
    I've got a really weird situation that I'm struggling to solve: a performance problem which really looks like an empty waiting sequence sitting in the code (while it probably isn't so). I've got a pretty powerful dedicated server (10 GB RAM, eight Xeon cores, etc.) running Ubuntu 10.04, with all the functional services (except the OpenVPN server used to provide secure access to clients) deployed in separate VirtualBox (vboxheadless) machines: one for the company e-mail server, one for the web server and one for the accounting/CRM server (Firebird plus a proprietary app server working with Delphi-made clients). CPU load (as "top" says) is almost always near zero. Host system RAM is close to 100% usage but not overloaded (very little swapping gets used, and memory freed by stopping one of the VMs doesn't get reused quickly). Approximately 50% of the guests' RAM is used. iostat usually shows near-zero %util. Network bandwidth seems to be underused. But the accounting/CRM client (a Win32 Delphi application run on WinXP machines) works hellishly slowly with this server (and works much better against an inside-LAN Windows server). I just can't imagine what can make it slow when there are plenty of CPU, RAM, HDD and bandwidth resources available on the clients and on the server, even at their hardest moments. In saying the bandwidth is underused, I not only know that the clients and the server are connected to the Internet with bigger channels than are really used (which still leaves a chance of a bottleneck of some sort on the route between them); I've also tested the bandwidth between the clients and the server by copying files among them.

    Read the article

  • Proper approach to debug PC startup problems (POST)

    - by saurabhj
    My CPU was heating up to around 65 deg C, and the last time this happened (about a year ago) I had thermal paste put between the CPU and heat sink, which managed to get it down to about 45-50 degrees. This time I got some thermal paste and applied it myself. However, my PC is now not showing the POST display and not starting up. This is what happens:
      - LEDs light up
      - HDDs spin
      - Mouse is getting power
      - All fans, including the processor fan, start
      - No display on the monitor
      - No diagnostic beeps (no sounds at all)
    What I have tried: removing everything including RAM, HDD, PCI cards and the AGP card, then booting up the machine; no change from the first state. What steps can I take to figure out where the problem lies?
    Note (might be important): when I removed the heat sink, the processor came out with it (it was stuck to it in spite of the processor latch being on); I had to pry them apart with a screwdriver.
    Configuration: Pentium 4, 2.8 GHz with HT (very old, I know); original Intel mobo with onboard sound and graphics (GB series); 2x512 MB DDR RAM; 2 SATA disks (320 GB / 250 GB); DVD writer; Creative sound card; network card.
    Any help would be appreciated. Thanks!

    Read the article

  • when to upgrade a server to more cores, versus more processors, versus an additional server?

    - by gkdsp
    The server hosting market is separated into single-, dual-, quad-, etc., processor machines, where each processor has several cores, or CPUs. My company will offer a Linux-based web application that relies on an Apache web server and a middle tier for business logic. The middle tier is used to crunch math and return the result to a client. Many clients may access the application simultaneously. The company will start with one processor having 4 cores. I'm trying to understand how the app uses the cores and then how to scale the application as the business grows, in terms of servers/processors/cores. For example, I'd assume initially one core would be used for Apache, and the other 3 used to process clients' requests for math crunching...
    Question 1: Does that mean, with the 3 cores available, I can handle 3 separate client requests simultaneously (e.g. 1 for each of 3 cores)? I mean, except for the shared RAM, is this effectively like having 3 individual machines (from the point of view of processing client requests simultaneously)? Or may only one client's request be processed at any one time, with that request divided across up to 3 cores depending on the type of process doing the math crunching and whether or not it can take advantage of multi-threading (so the number of cores affects how fast any one client request completes)? I'm confused about what the cores mean to the application here.
    Question 2: As the business grows and more client requests need to be processed, should the server be upgraded to (A) a new machine with more cores, (B) a new machine with two processors, 4 cores each, or (C) should we keep the original server and add another server with a single processor? Which route provides the most efficient way to scale the application, in terms of processing more client requests per time interval? Is the choice, for example, limited by RAM (when you need more RAM than the box can handle, it's time to add another server), or something else?
    Question 3: Is the total number of client requests processed simultaneously equal to the number of cores times the number of servers (minus the one core for Apache)?

    Read the article

  • Google Apps Domain Level Shared Contacts?

    - by dkirk
    My firm just switched to Google Apps Premier edition two weeks ago, and aside from the way Google handles shared contacts, things are going quite well. Previously, on our Exchange server, we had numerous shared contact lists set up in the shared folders. We had a separate list for vendors, sales agents, etc. Is there not a way to set up lists or groups such as this at the domain level in Google Apps? I have found a ton of forums with users asking the same question but no good answers, unless you purchase some third-party app in the marketplace. I have toyed around with the "google-shared-contacts-client" here: http://code.google.com/p/google-shared-contacts-client/ and this almost does it, but it falls short when trying to group contacts at the domain level or when trying to search for a contact by company name. Are either of these things possible? I am now looking at creating a Google Docs spreadsheet to share with the domain, just to have a separate, defined list of contacts that is searchable by various fields... Anyone who could shed some light on domain-level contact sharing relating to the points above, I would be most grateful...

    Read the article

  • Debian grub2 update removed Windows boot option.

    - by Wrikken
    Since I updated GRUB to GRUB 2, I no longer get the option to boot into Windows (which is unfortunately sometimes necessary for proprietary MSIE browser plugins I need to use for work). Relevant /boot/grub/menu.lst portion:

        ### END DEBIAN AUTOMAGIC KERNELS LIST

        # This is a divider, added to separate the menu items below from the Debian
        # ones.
        title           Other operating systems:
        root

        # This entry automatically added by the Debian installer for a non-linux OS
        # on /dev/hda1
        title           Windows NT/2000/XP
        root            (hd0,0)
        savedefault
        makeactive
        chainloader     +1

    This entry, however, does not appear anymore. I do have some entries in /boot/grub/grub.cfg, like this one:

        menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
            insmod part_msdos
            insmod ext2
            set root='(hd1,msdos1)'
            search --no-floppy --fs-uuid --set e638c434-4884-412f-a141-2c194f881fae
            echo 'Loading Linux 2.6.32-5-amd64 ...'
            linux /boot/vmlinuz-2.6.32-5-amd64 root=UUID=e638c434-4884-412f-a141-2c194f881fae ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-2.6.32-5-amd64
        }

    Do I have to alter that file? If so, what is the correct syntax for a Windows boot entry? If not, what could be the problem?
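
    For reference, grub.cfg is generated and gets overwritten by update-grub, so the usual approaches are to let os-prober detect Windows or to add a custom entry to /etc/grub.d/40_custom. A minimal sketch, assuming Windows is still on the first partition of the first disk (the (hd0,0) of the old menu.lst); treat the partition naming as something to verify on the box:

        # option 1: let GRUB 2 detect Windows automatically
        apt-get install os-prober
        update-grub

        # option 2: append to /etc/grub.d/40_custom, then run update-grub
        menuentry "Windows NT/2000/XP" {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'   # older GRUB 2 builds write this as (hd0,1)
            chainloader +1
        }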

    Read the article

  • How to "drag and drop" folders or multiple HTML files into a browser and have them open in multiple tabs

    - by PoorLuzer
    I save pages that I browse on the net and find interesting into a folder called C:\PageSaves. Later, during the commute, I open these pages to see what they are and move them into a neatly categorized folder tree. For example, Perl-related pages go to C:\Pages\Perl, MySQL-related pages go to C:\Pages\MySQL, and so on. I was wondering if there is any way I could open any number of HTML files on disc / inside a folder (C:\PageSaves in my case) in Mozilla/Firefox/K-Meleon etc. For example, I would like to just "drag and drop" the folder C:\PageSaves into Firefox and have it open all the .html pages in the folder, each in a separate tab. Right now, if I "drag and drop" multiple HTML files, it just opens the last file in the selection. I would also like a set of toolbar buttons, basically a plugin, that would allow me to nuke a page from disc (if I don't want to keep it anymore) or move the file (and its corresponding folder) into a predefined/new folder. I am familiar with coding full-blown Firefox plugins, so even if something very basic or almost similar exists, I can take it forward. Hints/clues/other methods of achieving the same result are all welcome!

    Read the article

  • How do I encapsulate the application server from the web and database servers?

    - by SNyamathi
    So I've been doing some reading, and it seems like the best practice would be to have separate database, application, and web servers. There are a few things that I've failed to understand; please feel free to recommend any reading materials that would address these topics.
    Database (assume MySQL) to application server communication: does the database server do any sort of checks on the SQL commands sent and returned, or is it just a "dumb pipe" that responds to SQL commands by spitting back data?
    Application server (assume Tomcat) to web server: almost the reverse here; is it the web server that is more of a pipe to the internet, forwarding requests to the application server and spitting back responses? I'm not wording this well, but I'm trying to ask: is it the application server that is responsible for validating data received from requests? For example: parsing POSTs, validating user logins, encrypting/decrypting data.
    Furthermore, how do these two servers communicate? I'm trying to keep things as flexible as possible here, so while I could write a web server in Java and use Java to communicate between the web and app server, that doesn't sound very modular. What if I want to use Python or some other language to replace the web server later on? What if I want to make a non-web-facing application used in house, written in C++ or something?
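
    In a typical split like this the web server talks to the application server over plain HTTP (or AJP, for Tomcat) as a reverse proxy, which is what keeps the tiers language-agnostic. A minimal sketch, assuming Apache httpd with mod_proxy/mod_proxy_http in front of Tomcat; the hostname and path are placeholders:

        # a vhost fragment on the web server
        <VirtualHost *:80>
            ServerName www.example.com
            # forward dynamic requests to the app tier; static files stay here
            ProxyPass        /app http://appserver.internal:8080/app
            ProxyPassReverse /app http://appserver.internal:8080/app
        </VirtualHost>

    Because the contract between the tiers is just HTTP, either side can later be rewritten in another language without touching the other.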

    Read the article

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in a section, rather than the page number. E.g.:

        Section No.                       Pages
        01010  Summary of Work..............5
        01025  Prices.......................2
        01400  Quality Control..............1
        01700  Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there an MS Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route as need be, as long as it was usable by our low-tech users.
    A related question: all the section files are in the same folder. It would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would this require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, the approach required too much maintenance for our Wordophobes.
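
    Word has no built-in field that pulls page totals from other documents, so this usually ends up as a small macro. A rough VBA sketch of the idea, assuming the section files live in C:\Specs and that a plain "filename, tab, page count" list at the cursor is close enough to the desired layout (the folder path and output format are assumptions):

        Sub BuildSectionPageCounts()
            Dim f As String, doc As Document, out As String
            f = Dir("C:\Specs\*.doc*")                       ' every Word file in the folder
            Do While Len(f) > 0
                Set doc = Documents.Open(FileName:="C:\Specs\" & f, ReadOnly:=True, Visible:=False)
                out = out & f & vbTab & doc.ComputeStatistics(wdStatisticPages) & vbCrLf
                doc.Close SaveChanges:=False
                f = Dir                                      ' next file
            Loop
            Selection.TypeText Text:=out                     ' drop the list at the cursor
        End Sub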

    Read the article

  • Standing and sitting while using a computer at work

    - by Adam Batkin
    I would like to be able to comfortably switch between sitting and standing while at work (I'm a software developer, so I spend most of my day in front of a computer). For the past couple of months I have been using a large elevated stand that sits on my desk (designed expressly for this purpose) containing my keyboard and mouse, and my monitors have been raised as high as possible and aimed upwards. So I can stand all day and I'm pretty comfortable (my right wrist may be at too much of an angle when it's on my mouse, but that's a separate issue). The only problem is that sometimes I want to be able to sit. I can easily place my keyboard and mouse back down under the elevated stand, but I have to look up pretty steeply and that is uncomfortable and makes it difficult to see the screens since they are tilted upwards. My monitor mounts are difficult to adjust quickly/easily, so I can't just re-aim them. I would of course love one of those hydraulic standing/sitting desks (cost isn't the problem). But I'm in a row of "trader-style" desks where it's basically a very long surface with people sitting at 6-foot intervals. What type of equipment do you recommend? I suppose the best thing would be some sort of monitor stand (it must be able to hold 2-3 LCDs) that can easily be lowered and raised. But any other suggestions are also welcome.

    Read the article

  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-less syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach. I'd love to understand their logic before I follow/ignore their model.
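
    For comparison, a minimal sketch of the protocol-relative loader the question is leaning towards for its own widget (widget.example.com and widget.js are placeholders); this only works if that one hostname serves the file correctly over both HTTP and HTTPS:

        (function () {
            var s = document.createElement('script');
            s.type = 'text/javascript';
            s.async = true;
            // the scheme is inherited from the embedding page, so the single
            // hostname must hold a certificate and answer on both ports
            s.src = '//widget.example.com/widget.js';
            var first = document.getElementsByTagName('script')[0];
            first.parentNode.insertBefore(s, first);
        })();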

    Read the article

  • Sending same email through two different accounts on different domains using Outlook 2010

    - by bot
    I am a programmer and don't have experience with Outlook configurations. Our company has two email domains, namely xyz.com and xyz.biz. Each employee has an email id on one of these domains, but not both, depending on the project they are working on. The problem we are facing is that when a communication email is sent from the Accounts, HR, Admin, etc. departments, they need to send the email twice: once through the xyz.com account to all employees with an email address on xyz.com, and once through xyz.biz to all employees with an email address on xyz.biz. I am not sure why they have to send two separate emails, but the IT team has directed all departments to do so, as there is no other solution according to them. Even though two different groups have been created, sending an email to employees in a group on xyz.biz from xyz.com does not seem to work. I want to know if Outlook provides a feature such that we can configure some kind of rule to send an email through an id on xyz.com to all users on xyz.com and have the same email sent automatically to users on xyz.biz through an id on xyz.biz. The only technical detail I know is that we are using Exchange 2003, and the IT team claims that this is a limitation causing the problem.
    Edit: Our company is split into two main divisions depending on the type of projects. I am pretty sure I use domain XYZ, whereas the employees in the other division use the domain ABC to log in to the Windows machine or Outlook itself. Also, employees in domain XYZ can access the machines on the network in domain ABC, but not the other way around.

    Read the article

  • map Linux drives to Windows 7 for media streaming over the internet

    - by Ortix92
    I'm trying to map a Linux network drive to my Windows 7 laptop; however, this laptop is not on the LAN. At home I simply use Samba, but this obviously won't work over the internet. I'm trying to avoid VPN, so if there are other solutions, I would like to know about them. The reason I ask is that my university does this as well: we can simply map folders to our computers without VPN connections. I'm not sure what they are running as servers. The main reason is that I want to be able to access my files stored on my home server wherever I go. They are located in the /home/ folder (videos, music and pictures folders). I'm trying to keep my websites and media separate from each other. I wouldn't mind accessing them from a web interface either, but I would like to keep the directory structure intact. I remember having an app like that come with Winamp and running it on my Windows PC (as the server); unfortunately it doesn't work for Linux. Any ideas on what I could use? Would XBMC be able to help me out with this? I did do some research but couldn't find any concrete answers.
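
    Universities that offer VPN-less drive mapping commonly do it with WebDAV, which Windows 7 can map natively. A rough sketch, assuming Apache with mod_dav/mod_dav_fs on the home server and HTTPS already configured; the paths, share name and password file are placeholders, not anything from the question:

        # Apache vhost fragment on the home server
        Alias /media /home/media
        <Location /media>
            DAV On
            AuthType Basic
            AuthName "home media"
            AuthUserFile /etc/apache2/webdav.passwd
            Require valid-user
        </Location>

    On the Windows 7 side the share can then be mapped with something like "net use Z: https://yourhost.example.com/media /user:you", which keeps the directory structure intact.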

    Read the article

  • File permissions on web server

    - by plua
    I have just read this useful article on file permissions, and I am about to implement an as-strict-as-possible file permissions policy on our webserver. Our situation: we have a web server accessed through sftp by different users from within our company, and we have the general public accessing Apache, sometimes uploading files through PHP. I distinguish folders and files by their use. So based on this reading, here is my plan.
    All people who need to upload files will have separate users, but all of those users will belong to two groups: uploaders and webserver. Apache will belong to the group webserver.
    Directories. Permission: 771. Owner: user:uploaders. Explanation: to access files in the folder, everybody needs to have execute permission. Only uploaders will be adding/removing files, so they also get r+w permission.
    Files within the web root. Permission: 664. Owner: user:uploaders. Explanation: they will be uploaded and changed by different users, so both owner and group need w+r permissions. The webserver only needs to read files, so r permission only.
    Upload directories. Permission: 771. Owner: user:webserver. Explanation: when files need to be uploaded, Apache needs to be able to write to this directory. But I figure it is safer to change the group ownership to webserver, thus giving Apache sufficient privileges (and all uploaders also belong to this group and will have the same permissions), while safeguarding against "others" writing to this folder.
    Uploaded files. Permission: 664. Owner: user:webserver. Explanation: after uploading, Apache might need to delete files, but this is no problem because it has w+r permission on the folder. So there is no need to make these files any more accessible than r access for the group.
    Being no expert on file permissions, my question is whether or not this is the best possible policy for our situation. Any suggestions welcome.
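
    A minimal shell sketch of how such a policy might be applied, assuming a web root of /var/www/site with an uploads directory beneath it and www-data as the Apache user; the paths, user names and the setgid bit (so new files inherit the group) are assumptions added for illustration, not part of the plan above:

        # groups and membership
        groupadd uploaders
        groupadd webserver
        usermod -a -G uploaders,webserver alice      # repeat per uploading user
        usermod -a -G webserver www-data             # the Apache user

        # directories 771, files 664, group uploaders
        chown -R alice:uploaders /var/www/site
        find /var/www/site -type d -exec chmod 2771 {} \;   # leading 2 = setgid bit
        find /var/www/site -type f -exec chmod 664 {} \;

        # upload area owned by group webserver so Apache can write there
        chgrp -R webserver /var/www/site/uploads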

    Read the article

  • Heavy write to Galera cluster - table locked, cluster practically unusable

    - by Joe
    I set up a Galera Cluster on 3 nodes. It works perfectly for reading data. I have written a simple application to run some tests on the cluster. Unfortunately I have to say that the cluster fails totally when I try to do some writing. Maybe it can be configured differently, or am I doing something wrong? I have a simple stored procedure:

        CREATE PROCEDURE testproc(IN p_idWorker INTEGER)
        BEGIN
            DECLARE t_id INT DEFAULT -1;
            DECLARE t_counter INT;
            UPDATE test SET idWorker = p_idWorker WHERE counter = 0 AND idWorker IS NULL LIMIT 1;
            SELECT id FROM test WHERE idWorker = p_idWorker LIMIT 1 INTO t_id;
            SELECT ABS(MAX(counter)/MIN(counter)) FROM TEST INTO t_counter;
            SELECT COUNT(*) FROM test WHERE counter = 0 INTO t_counter;
            IF t_id >= 0 THEN
                UPDATE test SET counter = counter + 1 WHERE id = t_id;
                UPDATE test SET idWorker = NULL WHERE id = t_id;
                SELECT t_counter AS res;
            ELSE
                SELECT 'end' AS res;
            END IF;
        END $$

    Now my simple C# application creates, for example, 3 MySQL clients in separate threads, and each one executes the procedure every 100 ms until there is no record where column 'counter' = 0. Unfortunately, after about 10 seconds something goes bad. On the servers there is a process 'query_end' that never ends. After that you cannot run an update on the test table; MySQL returns: ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction. You can't even restart MySQL. What you can do is restart the server, sometimes the whole cluster. Is Galera Cluster really so unreliable when you do massive concurrent writing/updates? Hard to believe.
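
    For what it's worth, this pattern (several nodes updating the same hot rows) is the worst case for Galera's optimistic, certification-based replication; the usual mitigations are to direct all writes to a single node and to let autocommit statements that lose a certification conflict be retried. A minimal my.cnf sketch of the retry part (the retry count is an example value, not a recommendation from the question):

        [mysqld]
        # retry an autocommit statement that loses a certification conflict
        wsrep_retry_autocommit = 4
        # log conflicts while debugging
        wsrep_log_conflicts    = ON

    Routing all writes through one node (for example via the load balancer) avoids the cross-node conflicts entirely; the application then only has to handle ordinary InnoDB deadlock/timeout retries.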

    Read the article

  • Notify user of message arrival in another mailbox

    - by Tim Alexander
    This is very similar to this question, but has a few differences. Basically we have a user dealing with a conflict-of-interest case. To separate the mail from prying eyes (and the draconian routing system we have in place), the user has been granted access to a second conflicts mailbox that is only accessible to him via OWA. This has worked fine for years, but now the user would like a notification to be sent to him when a message arrives in his conflicts mailbox. Initially I thought an Outlook rule would work, but of course the client is never logged in, so the Outlook rules are never processed. This led me to think that an Exchange transport rule might work, but the only options I can see are to forward or copy the message to another user, which would bypass the conflicts setup. All I really need is a notification, not the actual message, to be sent. Is this at all possible with Exchange 2007? Or if not, is there any third-party add-in or workaround that anyone has come across?

    Read the article

  • Running WAMP (XAMPP) and LAMP from One SSD, On 64-bit Windows and Linux Machines

    - by nicorellius
    I have a solid-state drive that I develop websites on. The reason I do this is that I work on a few different computers. Historically, I created a separate development environment for each machine. This was OK, but if a system changed for some reason, e.g. a new OS install, it was a pain. So I bought a USB 3.0 enclosure and put a solid-state drive in there, and it's pretty darn fast, which is good. I was working with three Windows machines, and I could simply hook up the drive, launch my XAMPP server and away I went, developing websites using Dreamweaver, Komodo, Notepad++, Eclipse, etc. Recently, however, one of my Windows machines' hard drives went down, and instead of going back to Windows in this case, I went with Ubuntu 12.04. I have several Ubuntu workstations and servers and I like Linux, so I thought this was a great opportunity to transition. I went to work installing and trying to set up a LAMP server and, aside from XAMPP's 64-bit compatibility out of the box, I'm seeing other issues with getting this Linux server running. I will keep trying to resolve this, but in the meantime... my question is, has anyone ever successfully run both WAMP and LAMP from the same SSD (formatted to NTFS)? I'm sure there are lots of barriers to this happening, like the local file system, OS libraries, dependencies, etc. But I was thinking it would be cool if it could be done. I'm no expert, so if this is just plain old stupid, please don't hesitate to let me know.

    Read the article

  • Shortcut To Full Screen App In Lion

    - by omghai2u
    I postponed getting OS X Lion for as long as I possibly could. Now that I have it, I'm having lots of difficulty getting it to behave how I want. On Snow Leopard my typical setup for working was 4 spaces. I'd keep a Windows VM open on space #4 full-screened, Linux open on space #3, and I'd do other stuff on spaces #1 and #2. My keyboard shortcuts allowed me to switch between my Windows work (Command + 4) and my Linux work (Command + 3) very quickly, and without the need for my hands to leave the keyboard (or, effectively, to even quit typing). Productivity was good. I see that on Lion a full-screened VM (and yes, they need to be full-screened; Fusion's Unity won't cut it for what I need to do) is its own separate desktop. I have set up 4 desktops and made my keyboard shortcuts to move between them Command + #, just as before. But how do I get my full-screened VM to be one of those already existing desktops? Or, rather, how do I make a shortcut for the full-screened app?

    Read the article

  • linux networking: how to redirect incoming connections from old server to new server?

    - by aliz
    Hi, I'm in the process of moving my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate internet connection with a static IP, and they are connected to each other through a local Ethernet connection. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The shorewall firewall is installed on both servers. There are some outdoor devices which periodically send data to port 43597 at the old server's IP address. I can run multiple instances of the network service which is responsible for receiving data from the devices on one server, but only on different ports.
    Here's the question: how can I run the service on the new server and have connections coming to the old server redirected to it, while new devices can still connect to the new server's IP address, preferably on the same port and the same service, until all devices get updated to send to the new server? I've tried a shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet connection, which breaks other things. I also found out about the redir utility, but haven't tried it yet. Is there any best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.
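
    One common pattern that avoids touching the new server's default route is to DNAT the port on the old server and masquerade the forwarded traffic over the inter-server Ethernet link, so replies go back the way they came. A rough iptables sketch, assuming the devices speak TCP and that the new server is reachable from the old one at 192.168.0.2 (an example address); the same idea can be expressed as shorewall DNAT rules:

        # on the old server
        echo 1 > /proc/sys/net/ipv4/ip_forward

        iptables -t nat -A PREROUTING  -p tcp --dport 43597 \
                 -j DNAT --to-destination 192.168.0.2:43597

        # rewrite the source so the new server replies via the old one,
        # not via its own internet connection
        iptables -t nat -A POSTROUTING -p tcp -d 192.168.0.2 --dport 43597 \
                 -j MASQUERADE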

    Read the article

  • How to correctly set up home directories and permissions on a mounted partition.

    - by user36505
    I'm setting up a Fedora 12 server. I have a root (/) partition where the boot (/boot) partition is mounted, and then a separate partition (/files) for keeping home directories and shares away from the other partitions. The filesystem mounts fine, and users can be created with home directories in /files/home/[user] just fine. However, when I log in as one of those users, I get an error saying "Cannot chdir in to /files/home/[user]: permission denied". If I create a user under the default /home using the same process, everything works fine. The same goes for when I try to browse a share in Windows: I can see the shares, but cannot access them. The permissions and owners on /files and /files/home are the same as on /home. When a user is created, the user directory's owner and permissions are also the same. How can I set up the /files partition so that it can be used for home directories and Samba sharing rather than using the root (/) partition? Thanks.
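
    On Fedora this symptom (ownership looks right, yet chdir is denied and Samba shares are visible but not browsable) is very often SELinux labelling rather than classic permissions, since /files/home does not carry the home-directory context that /home does. A sketch of the usual checks and fixes; the equivalence rule and boolean are the standard targeted-policy ones, but treat the exact names as something to verify on the box:

        # classic permissions: every directory on the path needs the execute bit
        ls -ld /files /files/home /files/home/someuser

        # see whether SELinux is what is denying access
        getenforce
        ls -Zd /home /files/home          # compare the security contexts

        # label /files/home the same way /home is labelled, permanently
        semanage fcontext -a -e /home /files/home
        restorecon -Rv /files/home

        # allow Samba to export home directories
        setsebool -P samba_enable_home_dirs on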

    Read the article

  • VOIP and internet connection speeds [cable vs. fiber]

    - by microchasm
    Our office is migrating to IP telephony. We have fewer than 10 employees who will be using the phones. We currently have cable internet, and the provider just bumped the speeds. There is a data center that was recently built in our building, and we were considering co-lo'ing there in the near future. As a result, they offered us access to their triple-redundant internet, but it's quite expensive: 3 Mbps committed, with bursts up to 10 Mbps, for $250/month (discounted). We pay about $120 for our cable (which the plan was to keep, at least for TV). I want the phone system and LAN to be as separate as possible. I was thinking about keeping the cable for the LAN and using the other connection for the phones (until I saw the price). Now I'm thinking it might make sense to add on to our existing cable setup, and change our phone setup to only have DSL as a backup for the cable. Is there any real benefit to the fiber, especially at that price? Any other suggestions or ideas? Thanks.

    Read the article
