Search Results

Search found 41511 results on 1661 pages for 'via point'.


  • What are possible reasons why a calendar entry in OWA is at a different time than in Outlook?

    - by Ken Pespisa
    We have two Exchange 2003 servers: our primary server and a front-end server that hosts Outlook Web Access (OWA). When I open my boss's calendar via Outlook 2007 (from my Outlook client as well as hers), I see the event scheduled for 10:30 am. When I open her calendar via Outlook Web Access, the same event is scheduled for 4:30 am. I don't understand Exchange well enough to imagine how this is possible. There must be some cached data on the front-end server that causes the calendar entry to appear at a different time, I suppose. Any insight into how Exchange manages that cache, and where I could look for an issue, would be very helpful. Thank you!

    Read the article

  • How should we serve files in a small bioinformatics cluster?

    - by cespinoza
    We have a small cluster of six Ubuntu servers. We run bioinformatics analyses on these clusters. Each analysis takes about 24 hours to complete, each Core i7 server can handle two at a time, takes as input about 5GB of data, and outputs about 10-25GB of data. We run dozens of these a week. The software is a hodgepodge of custom Perl scripts and third-party sequence alignment software written in C/C++. Currently, files are served from two of the compute nodes (yes, we're using compute nodes as file servers): each node has five 1TB SATA drives mounted separately (no RAID), pooled via GlusterFS 2.0.1. Each has three bonded Intel PCI gigabit Ethernet cards, attached to a D-Link DGS-1224T switch ($300, 24-port, consumer-level). We are not currently using jumbo frames (not sure why, actually). The two file-serving compute nodes are then mirrored via GlusterFS, and each of the four other nodes mounts the files via GlusterFS. The files are all large (4GB+) and are stored as bare files (no database, etc.), if that matters. As you can imagine, this is a bit of a mess that grew organically without forethought, and we want to improve it now that we're running out of space. Our analyses are I/O intensive and it is a bottleneck: we're only getting 140MB/sec between the two file servers, and maybe 50MB/sec from the clients (which only have single NICs). We have a flexible budget which I can probably get up to $5k or so. How should we spend our budget? We need at least 10TB of storage fast enough to serve all nodes. How fast/big does the CPU/memory of such a file server have to be? Should we use NFS, ATA over Ethernet, iSCSI, GlusterFS, or something else? Should we buy two or more servers and create some sort of storage cluster, or is one server enough for such a small number of nodes? Should we invest in faster NICs (say, PCI Express cards with multiple connectors)? The switch? Should we use RAID; if so, hardware or software, and which RAID level (5, 6, 10, etc.)? Any ideas appreciated. We're biologists, not IT gurus.

    Read the article

  • Create True VLAN over RAS

    - by Bigbohne
    Hi, I was wondering if this is possible: I want to create a virtual network over RAS using Windows Server 2003. The client should be able to connect to the server using L2TP and should get an IP address from a private range (let's say 192.168.1.100 - 192.168.1.200, with a subnet mask of 255.255.255.0). Now each client connected to the server should be able to ping any other connected client, e.g. 192.168.1.123 <- 192.168.1.145, via RAS through the server. Is this possible? And... how? Best regards, andre

    Read the article

  • How can a Virtualbox host connect to a guest VM when host wireless is disabled / host Ethernet cable is unplugged?

    - by uloBasEI
    I have a VirtualBox VM running on a computer connected to the Internet via an Ethernet cable. The guest has a network adapter attached to NAT. Two ports (22 and 80) are forwarded so that the host can reach them on localhost:2222 and localhost:8080 respectively. When the Ethernet cable is plugged in, both machines (host and guest) can access the Internet, and the host can reach the guest's SSH server/web server on the forwarded ports. When I unplug the Ethernet cable from the host, the host can no longer reach the guest's SSH server/web server. The same happens with a laptop connected to the Internet via wireless when I disable the wireless adapter or set a wrong WPA key. My question is: is there a workaround so the host can reach the guest's services even when its Ethernet cable is unplugged or wireless is unavailable?
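
    For reference, the NAT port forwarding described above can be set up from the command line roughly as follows (a sketch; "MyVM" and the rule names are placeholders for the actual VM):

        # Forward host 127.0.0.1:2222 -> guest port 22, and 127.0.0.1:8080 -> guest port 80,
        # on the VM's first (NAT) adapter.
        VBoxManage modifyvm "MyVM" --natpf1 "guestssh,tcp,127.0.0.1,2222,,22"
        VBoxManage modifyvm "MyVM" --natpf1 "guesthttp,tcp,127.0.0.1,8080,,80"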

    Read the article

  • Duplicating KeePass files instead of creating a new file

    - by BlakBat
    I'm currently using KeePass 2 and syncing my databases via Dropbox. I have a few KeePass files (one for websites, one to store software licenses, etc.). Every time I need a new KeePass file, I just create a copy of a .kdbx file, open it, remove all existing entries, and change the key transformation rounds to another pseudo-random value. I do not change the master password. I want to know if this is unsafe practice, or a security risk, compared to just creating a new KeePass file via the "File - New" menu. The reason I don't use the menu: I'm lazy enough to not want to reconfigure the database settings every time.

    Read the article

  • If the WiFi is switched on, should it disable the Ethernet? [closed]

    - by Peter Stuart
    My friend is having problems with her laptop, and I am trying to help her via SMS. She can't get her laptop connected to the Internet via the Ethernet connection, and there is no WiFi in the area. Could it be because her WiFi is switched on? She is using an Acer Aspire. If she manually switches it off, could that allow the Ethernet connection to work? Or is it a missing driver? The cable works fine, as someone else tried it. Thanks, Peter

    Read the article

  • Using Queries with Coherence Read-Through Caches

    - by jpurdy
    Applications that rely on partial caches of databases, and use read-through to maintain those caches, face some trade-offs if queries are required. Coherence does not support push-down queries, so queries will apply only to data that currently exists in the cache. This is technically consistent with "read committed" semantics, but the potential absence of data may make the results so unintuitive as to be useless for most use cases (depending on how much of the database is held in cache). Alternatively, the application itself may manually "push down" queries to the database, either retrieving results equivalent to querying the cache directly, or querying the database for a key set and reading the values from the cache (relying on read-through to handle any missing values). Obviously, if the result set is too large, reading through the cache may cause significant thrashing. It's also worth pointing out that if the cache is asynchronously synchronized with the database (perhaps via a database change listener), an application may commit a transaction to the database, then generate a key set from the database via a query, then read the entries through the cache, possibly resulting in a race condition where the application sees older data than it had previously committed. In theory this is not problematic, but in practice it is very unintuitive. For this reason it often makes sense to invalidate the cache when updating the database, forcing the next read-through to update the cache.
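
    As a rough illustration of the key-set variant described above, here is a minimal sketch assuming a JDBC data source and the Coherence NamedCache API; the table, column, and cache names are hypothetical:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        import java.sql.*;
        import java.util.*;

        public class PushDownQueryExample {
            public static void main(String[] args) throws SQLException {
                // Hypothetical JDBC source; the "orders" table and "status" column are assumptions.
                try (Connection con = DriverManager.getConnection("jdbc:mysql://dbhost/appdb", "user", "pass");
                     PreparedStatement ps = con.prepareStatement("SELECT id FROM orders WHERE status = ?")) {
                    ps.setString(1, "OPEN");

                    // 1. Push the query down to the database, fetching only the keys.
                    Set<Long> keys = new HashSet<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            keys.add(rs.getLong(1));
                        }
                    }

                    // 2. Read the values through the cache; read-through populates any misses.
                    NamedCache cache = CacheFactory.getCache("orders");
                    Map<?, ?> results = cache.getAll(keys);
                    System.out.println("Fetched " + results.size() + " entries via the cache");
                }
            }
        }

    Note that the race described above still applies: if the key-set query runs before the cache has seen a recent update, the values read through the cache may be stale until the cache is invalidated.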

    Read the article

  • Higher Performance With Spritesheets Than With Rotating Using C# and XNA 4.0?

    - by Manuel Maier
    I would like to know the performance difference between using multiple sprites in one file (a sprite sheet) to draw a game character that can face in four directions, and using one sprite per file but rotating that character as needed. I am aware that the sprite sheet method restricts the character to predefined directions, whereas the rotation method would give the character the freedom of "looking anywhere". Here's an example of what I am doing:

    Single sprite method: assume I have a 64x64 texture that points north. To make it point east, I do the following:

        spriteBatch.Draw(
            _sampleTexture,
            new Rectangle(200, 100, 64, 64),
            null,
            Color.White,
            (float)(Math.PI / 2),
            Vector2.Zero,
            SpriteEffects.None,
            0);

    Multiple sprite method: now I have a sprite sheet (128x128) where the top-left 64x64 section contains a sprite pointing north, the top-right 64x64 section points east, and so forth. To make it point east, I do the following:

        spriteBatch.Draw(
            _sampleSpritesheet,
            new Rectangle(400, 100, 64, 64),
            new Rectangle(64, 0, 64, 64),
            Color.White);

    So which of these methods uses less CPU time, and what are the pros and cons? Is .NET/XNA optimizing this in any way (e.g. noticing that the same call was made last frame and reusing an already rendered/rotated image that's still in memory)?

    Read the article

  • Purpose oriented user accounts on a single desktop?

    - by dd_dent
    Starting point: I currently do development for Dynamics AX, Android, and an occasional dabble with WordPress and Python. Soon I'll start a project involving setting up WordPress on Google App Engine. Everything is, and should continue to, run from the same PC (running Linux Mint). Issue: I'm afraid of botching/bogging down my setup by tinkering with and installing multiple runtimes/IDEs/SDKs/services, so I was thinking of using multiple users, each dedicated to the task at hand (web, Android, etc.) and making each user as inert as possible to the others. What I need to know is the following: Is this a good/feasible practice? The next closest thing to this is using remote desktop connections, either to computers or to VMs, which I'd rather avoid. What about switching users? Can it be made seamless? Anything else I should know? Update and clarification regarding VMs and whatnot: The reason I wish to avoid resorting to VMs is that I dislike the performance impact and sluggishness associated with them. I also suspect they might add a layer of complexity I wish to avoid. This answer by Wyatt is interesting, but I think it's only partly suited to my requirements (web development, for example). Also, in reference to the point made about system-wide installs, there is a level of compromise I should accept, as expressed by this, for example. The option suggested by 9000 is also enticing (more than VMs, actually), and by no means do I intend to "juggle" JVMs and whatnot, partly for the reason mentioned before. Regarding complexity, I agree and would consider what was said; it's just that, in my experience, I tend to pollute my work environment with SDKs and runtimes I tried and discarded, which occasionally leave leftovers that cause issues throughout the session. What I really want is a set of well-defined, non-virtualized sessions from which I can choose at my leisure, each reasonably safe from being affected by the others. And what I'm really asking is whether, and how, this can be done using user accounts.

    Read the article

  • Installer can't create shortcuts - Vista Home Premium

    - by teponta
    Suddenly, whenever I try to install something new on my system, all goes well until it gets to the point of creating Start Menu icons. At this point I get an alert saying that the installer doesn't have permission to access the Start Menu folder, and my only options are Ignore, which just keeps triggering the same alert, and Cancel, which totally undoes the installation. I've tried disabling UAC (a feature I detest anyway) and running the installer as administrator from a right-click. Neither works. I also have 8 subfolders under my C:\Users folder with various names, some of which I can look into and some of which I can't. I have no idea where all this stuff came from, since I have a personal PC for home use and nobody uses it but me. Any suggestions, anyone? Thanks, T.E.Ponta

    Read the article

  • PHPMyAdmin running very slow over internet but fine locally

    - by columbo
    I connect to phpMyAdmin remotely on a CentOS server from my local PC via Firefox. Usually it's fine, but today it's really slow (2 minutes to load a page), sometimes timing out. Other connections to the server are fine: the SSH command line is as fast as ever, as is the GNOME desktop over SSH. In fact, on the GNOME desktop I can run phpMyAdmin locally from its browser and it's as quick as ever (which is a workaround, of course). I've checked the various log files and seen nothing unusual, and I've logged into the MySQL command line and the database is running fine without any slowing whatsoever. So it just seems to be slow when I access phpMyAdmin on the server from the browser on my remote PC (I've tried IE and Firefox; both are slow). Has anyone experienced this, or have any ideas what the issue could be? Connecting via the CLI through a tunnel works OK - the problem is in phpMyAdmin for sure. Cheers

    Read the article

  • How to issue machine certificates to Android devices trying to connect to L2TP VPN (L2TP/IPsec with Certificate)?

    - by John Hendrix
    I am trying to find a way to connect Android devices to our VPN box running Windows Server 2008. We managed to configure a couple of Android devices to connect via PPTP. However, I would like to be able to connect using L2TP/IPsec with certificates instead. I've managed to export and apply the Enterprise CA's certificate on the Android phone, but I am totally lost on how to issue a machine certificate to the Android phone. Is it even possible? If so, what steps should I take to issue the machine certificate and enable the Android phone to connect via L2TP/IPsec with certificates? Thank you for your help!

    Read the article

  • Mac OSX 10.8 Server DNS Domain Routing

    - by Oldek
    I just can't seem to figure out the logic of how to configure my Mac server. I have set up DNS to take the domain and all subdomains and point them at an IP.

    File: db.mydomain.com (in /var/named/):

        mydomain.com. 10800 IN SOA mydomain.com. admin.mydomain.com. (
                      2012110903 ; serial
                      3600       ; refresh (1 hour)
                      900        ; retry (15 minutes)
                      1209600    ; expire (2 weeks)
                      86400      ; minimum (1 day)
                      )
                      10800 IN NS mydomain.com.
                      10800 IN A 10.0.1.2
        www.mydomain.com. 10800 IN A 10.0.1.2

    So I want all of these requests to go to the 10.0.1.2 server, as I run two servers in my cluster. That one has always handled the requests, and now I want to add a server in between. The server in between will get all the traffic coming from outside, NATed by my router. After setting this up and trying to point port 80 towards my new middle server, it doesn't work. Is it even possible to do it this way? First server: Mac. Second server: Linux. What I'm trying to achieve, once more:

    1. User goes to mydomain.com or www.mydomain.com
    2. The request is handled by my first server
    3. The first server refers it to a local server, which is only available locally (it is configured to accept requests on port 80 and handle them)
    4. The second server receives the request
    5. The second server returns a response (either sent directly to the user or sent back through the first server, whichever is more secure and configurable)

    I also want to be able to set up domains that lead to other servers in the future, and some that are only available within the VPN (if that changes anything). I hope some kind soul can help me with this; it is really cumbersome for my mind to get the logic here. Do I have to configure my other server in any way? /Marcus

    Read the article

  • It is not quantifiably better

    - by MarkPearl
    An interesting statement I have heard recently in one of the organizations I have been working with is that some of the agile processes we are implementing are not quantifiably better than the traditional processes they had before. This seemed to be the motivation for not rolling the new process out to the rest of the organization or expanding it. They would say, "The team seems to be happier than before, but the improvement is not quantifiable, and until we can quantify it on paper we cannot make any further changes." Until recently I thought this was a problem, until it dawned on me that their existing system was not being quantified either; meaning even if I managed to quantify what we were doing (which I can), what would we be comparing it to? An appropriate response to someone who gives this reasoning is: "That's a very good point; let's go over the quantifiable attributes of your existing processes and see if we can find some common metric to compare them on." If they are able to produce some quantifiable metrics, you win, because you now have something to compare against; and if they don't, then you can politely point out the logic of that as well.

    Read the article

  • How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: you have a central computer which generates a lot of data. This data must go through some processing which, unfortunately, takes longer than generating it. In order for the processing to keep up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are retasked to another. The main question is: what approach should I use to achieve this? Perhaps the following would help me arrive at an answer: Is there a name or design pattern for what I am trying to do? What domain of knowledge do I need to get these computers talking to each other? (E.g., will a database, which I have some knowledge of, be enough, or will this involve sockets, of which I have no knowledge yet?) Are there any examples of such a system? The main question is a bit general, so it would be good to have a starting point/reference point. Note that I am assuming constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
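
    The retasking requirement described above (hand out jobs, watch for silent workers, re-queue their jobs) is the core of the classic master/worker pattern. The asker's constraints are C++ on Windows, but the pattern itself is language-agnostic; below is a minimal, hypothetical sketch in Java, with the job type, timeout, and polling interval invented for illustration:

        import java.util.Map;
        import java.util.concurrent.*;

        // Minimal sketch of a master that retasks jobs dropped by slaves.
        public class Master {
            private final BlockingQueue<Integer> pending = new LinkedBlockingQueue<>();
            private final Map<Integer, Long> inFlight = new ConcurrentHashMap<>(); // job -> start time
            private static final long TIMEOUT_MS = 30_000; // assumption: a slave is dead after 30 s of silence

            public void submit(int job) {
                pending.add(job);
            }

            // A slave calls this (in reality over a socket or message queue) to ask for work.
            public Integer takeJob() throws InterruptedException {
                Integer job = pending.take();
                inFlight.put(job, System.currentTimeMillis());
                return job;
            }

            // A slave calls this when a job is done.
            public void completeJob(int job) {
                inFlight.remove(job);
            }

            // Periodically re-queue jobs whose slave went silent.
            public void startReaper() {
                ScheduledExecutorService reaper = Executors.newSingleThreadScheduledExecutor();
                reaper.scheduleAtFixedRate(() -> {
                    long now = System.currentTimeMillis();
                    inFlight.forEach((job, started) -> {
                        if (now - started > TIMEOUT_MS && inFlight.remove(job, started)) {
                            pending.add(job); // retask: another slave will pick it up
                        }
                    });
                }, 5, 5, TimeUnit.SECONDS);
            }
        }

    A production version would track per-slave leases or heartbeats rather than a flat timeout, but the structure (pending queue, in-flight map, reaper) stays the same.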

    Read the article

  • Dual Monitors don't turn off/sleep when they should

    - by Mario
    Details: dual 25" monitors connected to an HP Envy, one via HDMI and one via a DisplayPort-to-DVI adapter. The power scheme is set to High Performance (Dim display: Never - Turn off display: 15 mins - Computer sleep: Never). The screensaver is set to kick in after 10 minutes of idle (which happens). Five minutes later, the screensaver stops, the "Monitor going to sleep" notice appears on screen, and the monitors go to sleep briefly. All is well thus far. Then, suddenly, the Windows 7 alert sound for an unplugged device is heard and the monitors turn back on: the screens are black, only the mouse cursor is displayed, and the backlighting is back on. This only started happening after I obtained and connected the second 25" monitor a few days ago; I had a 24" in its place before, and this wasn't happening. Why is this happening, and how do I correct this behavior? Thanks in advance.

    Read the article

  • Get OpenVPN clients names to resolve through dnsmasq

    - by Fake Name
    I have a pfSense box running as an OpenVPN server. There are several remote devices that connect through the VPN (as tap devices). The VPN part is working: I can access the remote hardware by looking up the IP assigned to each device on the pfSense router. What I'd like is to be able to resolve the remote hardware addresses via DNS while on the local network. Note that this is only local network - remote device (they're backup boxes); I don't need the remote devices to resolve through the local DNS forwarding agent. The rest of the devices on the network that need to be reachable via DNS report their names during the DHCP process. However, the IP assignment for OpenVPN tap clients, while dynamic (which is why I need DNS), does not seem to use the local DHCP server. How can I have my OpenVPN server add records for its clients to the dnsmasq resolver? Is this setup even reasonable (I'm not familiar with OpenVPN at all)?
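
    One possible direction, sketched under assumptions (untested on pfSense; all paths are hypothetical): OpenVPN can run a script on its learn-address hook each time a client address is learned or released, and that script can maintain an extra hosts file that dnsmasq reads via addn-hosts (dnsmasq re-reads such files on SIGHUP):

        # OpenVPN server config excerpt (assumed path)
        learn-address /etc/openvpn/learn-address.sh

        #!/bin/sh
        # /etc/openvpn/learn-address.sh
        # OpenVPN passes: $1 = add|update|delete, $2 = address, $3 = common name (absent on delete)
        HOSTS=/var/etc/openvpn-clients.hosts   # assumed; dnsmasq would need addn-hosts=<this file>
        case "$1" in
          add|update)
            grep -v "^$2 " "$HOSTS" > "$HOSTS.tmp" 2>/dev/null
            echo "$2 $3" >> "$HOSTS.tmp"
            mv "$HOSTS.tmp" "$HOSTS"
            ;;
          delete)
            grep -v "^$2 " "$HOSTS" > "$HOSTS.tmp" 2>/dev/null
            mv "$HOSTS.tmp" "$HOSTS"
            ;;
        esac
        # Tell dnsmasq to re-read its hosts files
        kill -HUP "$(cat /var/run/dnsmasq.pid)" 2>/dev/null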

    Read the article

  • Trouble running multiple Firefox versions on OS X Lion

    - by politicus
    I am trying to use two versions of Firefox (11 and 13) on OS X Lion. I don't especially want to run the two versions at the same time; I want to be able to run version 11 when I choose the FF11 profile via the profile manager, and likewise for version 13. But every time I try to launch version 11, the launcher starts version 13: when I select the FF11 profile via the profile manager, Firefox 13 is launched. I created two Automator apps (FF11.app and FF13.app) to launch Firefox:

        /Applications/Firefox11.app/Contents/MacOS/firefox-bin -no-remote -P FF11 > /dev/null &
        /Applications/Firefox13.app/Contents/MacOS/firefox-bin -no-remote -P FF13 > /dev/null &

    Any help would be greatly appreciated. Thanks in advance.

    Read the article

  • ldap samba user access issue

    - by ancillary
    I have a Samba share on the LAN, auth'd via LDAP. Users access the file system via AD Windows shares. There are shortcuts in directories that point to dirs on the Samba share. Typically a user will click a shortcut to the SMB dir and be met with a permission-denied error. Upon closing Explorer and reopening, it will work. DNS is handled by the domain controller, and that is the only DNS server any of the machines use. There is nothing in Event Viewer, and I only see successful auth entries in the Samba log. Any ideas?

    Read the article

  • How to combine OpenVPN with a dynamic IP?

    - by asfasdv
    Dear everyone, I am currently using OpenVPN to bypass censorship while surfing online. Let me show you the initial scenario. Before OpenVPN is turned on - IP: 1.2.3.4 (hypothetical, checked by visiting whatismyip.com). After OpenVPN is turned on - IP: 10.2.3.4 (also checked with whatismyip.com; I assume this is the VPN exit point's IP). Situation: once I enable OpenVPN, I can still SSH into this computer by SSHing to 1.2.3.4, even though whatismyip.com says my IP is 10.2.3.4. However, I am on a dynamic IP, I run a website, and I am using tools (inadyn in particular) which ping freedns.afraid.org (my DNS provider) and update my IP. The messed-up part is that when inadyn does so, my DNS record changes to 10.2.3.4, which is presumably the exit point of my VPN. How do I get around this? (Note that SSHing into 1.2.3.4 STILL works.)

    Read the article

  • Gathering application architecture

    - by userbb
    Suppose there is a system for gathering info about system activities. There is a client part with an interface, and there are agent parts installed on each machine. I estimate there could be a maximum of 20 computers now; later it could be more like 50. My candidate solutions:

    1. The agent stores data in a local database, e.g. SQLite. There is also a service a client can use to query the data, so if a client wants to display data for 50 computers, it sends a query to 50 computers. I'm leaning toward this solution now, but maybe it's totally wrong.
    2. The agent stores data in a local database (I don't know a good one for this). There is also a server (main database), and the local databases are synchronized with the server. In this case, a client connects to the main database to display data.
    3. The agent sends data in real time to the main database - the same as point 2, but with no sync step.
    4. Like point 3, but the agent buffers data in a local database and sends it to the main database in small chunks.

    What is the best approach?

    Read the article

  • How to prevent nginx from appending the location to root? [duplicate]

    - by simonszu
    I want to serve an Icinga web view via nginx. This web view should be accessible via myserver.com/icinga (as the Debian autoconfig for Apache will do). I have the following lines in my nginx config:

        location /icinga {
            root /usr/share/icinga/htdocs;
            index index.html;
            auth_basic "Restricted";
            auth_basic_user_file /etc/icinga/htpasswd.users;
        }

    However, I get a 404 error and a log entry that says:

        *10 open() "/usr/share/icinga/htdocs/icinga" failed (2: No such file or directory)

    So it seems that nginx appends the location to the root value. I think I figured out how to prevent this some time ago, but I did not document it for myself and have forgotten how, and now I can't fix it myself. Can you tell me how to prevent this behaviour?
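
    For context, this is nginx's documented behaviour for root: the full request URI is appended to the root path. The alias directive instead replaces the matched location prefix, which is usually what a layout like the one above calls for. A minimal sketch of that variant, assuming the same paths (note the trailing slashes, which matter with alias):

        location /icinga/ {
            alias /usr/share/icinga/htdocs/;
            index index.html;
            auth_basic "Restricted";
            auth_basic_user_file /etc/icinga/htpasswd.users;
        }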

    Read the article

  • Ubuntu automatic logout whenever I execute exe files

    - by KeepTrying
    I have a problem. Here's the thing: there were four partitions on my hard drive - one for the Ubuntu root folder, one for the Ubuntu home folder, one for general stuff like music and movies, and the last one for swap. To install Windows 7, I resized the partitions and changed their order using GParted. I moved all of the ext-formatted partitions to the left, which means the spare space would be at the right. I formatted that spare space as NTFS and installed Windows 7. After successfully installing Windows 7, I used a live USB to fix GRUB: I installed Boot Repair and, with just one click, I can now dual-boot Ubuntu and Windows 7. But, to the point: because of the change in partition order, especially of the partition containing the home folder, I couldn't log in to Ubuntu. I used recovery mode and changed the file /etc/passwd. Everything almost got back to normal, except one thing: the Windows apps that I installed via Wine don't work anymore. I run them via the menu Applications/Wine/Programs, but nothing loads. One more thing: when I double-click exe files to run them, Ubuntu suddenly logs out. Thank you for reading my post; it's quite long and my English is fairly poor. I'd appreciate anyone who reads it.

    Read the article

  • What is the best way to get the external internet gateway IP reported periodically?

    - by basilmir
    I have an OS X server behind an AirPort Extreme, serving services via ports opened on the AirPort. The server has a 10.0.x.x local address, always the same one. The AirPort Extreme gets its external IP address via PPPoE, and sometimes - once a week or so - it changes. For security reasons we actually LIKE this behavior, but I need a way to know the external IP address just in case I need to connect and do something to the server while on the outside. What can I do?
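
    A minimal sketch of one approach, assuming an external "what is my IP" service such as ifconfig.me and a working local mail setup (both assumptions, not part of the question): a cron job on the server that periodically mails out the address it sees from outside.

        # crontab entry (hypothetical recipient): mail the current external IP once an hour
        0 * * * * curl -s ifconfig.me | mail -s "Current external IP" admin@example.com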

    Read the article

  • Shell Script to Start Mysql Server if not running

    - by user103373
    I have written a shell script to start the MySQL server and send a mail to the admin user when it is restarted by the script. The issue I am facing: if I run this shell script in a terminal, it works perfectly, but if the same script runs via cron, it only sends the mail to the user and the problem remains. Is this problem related to permissions, and how can I resolve it?

    Shell script:

        #!/bin/bash
        EMAIL="[email protected]"
        SERVICE='mysql'
        if ps ax | grep -v grep | grep $SERVICE > /dev/null
        then
            echo "$SERVICE service running, everything is fine"
        else
            echo "$SERVICE is not running"
            /etc/init.d/mysql start
            cat <<EOF | msmtp -a gmail $EMAIL
        Subject: "Alert (Test Server) : Mysql Service is not running (Manually Restarted)"

        Mysql Server Restarted at: `date`
        EOF
            exit
        fi

    I am using msmtp to send mail to the user on an Ubuntu 12.04 server.

    Read the article
