Search Results

Search found 12435 results on 498 pages for 'pretty printer'.


  • Authentication on Exchange using EWS managed API

    - by Jacob Proffitt
    I'm having a weird issue with Exchange Web Services. The operation I'm attempting is pretty simple: pull a user's calendar items for the current week on our internal website. When testing locally, the EWS Managed API pulls the calendar information just fine. When deployed to the web server (using Integrated Windows Authentication), it chokes. My trace is telling me that access is denied in the Exchange call. Initially, I thought this was a double-hop NTLM permissions issue, but it turns out that the service actually works for some internal users, but not for most. The only thing I can find that the functioning users have in common is that they are BlackBerry users, and I surmise that their Exchange permissions are set up differently. Or are their Active Directory accounts set up differently? I don't know, and it's driving me crazy. I suspect that the BlackBerry app runs some scripts when a user is added to the application, but I'm completely unfamiliar with what may be going on behind the scenes there. So: is there a way to duplicate the permissions those users enjoy (either AD or Exchange permissions)? And/or how exactly does one fix the double-hop credentials situation?
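
    On the double-hop question specifically, the common way out with the Managed API is to stop delegating the caller's NTLM token at all: authenticate to EWS as a dedicated service account and use Exchange impersonation for the mailbox being read. A sketch (account names and URL are placeholders; the service account needs the ms-Exch-EPI-Impersonation right):

        using System;
        using Microsoft.Exchange.WebServices.Data;

        class CalendarFetch
        {
            static void Main()
            {
                ExchangeService service =
                    new ExchangeService(ExchangeVersion.Exchange2007_SP1);
                // Explicit service-account credentials: nothing has to hop
                // from the IIS box to Exchange.
                service.Credentials = new WebCredentials("svc-ews", "password", "EXAMPLE");
                service.Url = new Uri("https://mail.example.com/EWS/Exchange.asmx");
                // Act as the user whose calendar is being read.
                service.ImpersonatedUserId = new ImpersonatedUserId(
                    ConnectingIdType.SmtpAddress, "user@example.com");

                CalendarView week =
                    new CalendarView(DateTime.Today, DateTime.Today.AddDays(7));
                foreach (Appointment appt in
                         service.FindAppointments(WellKnownFolderName.Calendar, week))
                    Console.WriteLine("{0}  {1}", appt.Start, appt.Subject);
            }
        }

    That sidesteps both the Kerberos delegation setup and whatever per-user permission difference the BlackBerry provisioning is creating.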

    Read the article

  • Intel Wireless 4965AGN not achieving N throughput when connected to an AirPort Express N network

    - by BenA
    I have an Intel Wireless WiFi Link 4965AGN adaptor in my laptop (HP Pavilion dv2000 series) which is connecting to a 5GHz-only 802.11n network provided by an Apple AirPort Express. The network is using WPA2 encryption. My desktop is also connected to the AirPort, via a Linksys WUSB600N USB adaptor. Both are running the latest drivers, and the AirPort is running the latest firmware. The AirPort is also configured to use wide channels. The problem I have is that I never get throughput above 4MB/s when transferring files between the two machines. Even a pessimistic calculation shows a 270Mbps network as being capable of transfer rates well above 10MB/s. I'm pretty sure I've isolated the issue to the Intel adaptor, as wiring the desktop to the AP and using the Linksys adaptor on the laptop immediately yielded speeds limited only by the 100Mbit/s Ethernet connection. I know that 802.11n is still a draft standard, and so mixing kit from different manufacturers can easily lead to poor results, but I was just wondering if anybody else out there has had success with this Intel adaptor on an N network? Or even better, connecting it to an AirPort Express? Can anybody give me any advice on how to troubleshoot this issue? I should also mention that the AirPort Express doesn't allow you to manually specify channels when running in N mode, and that I've been able to rule out interference from other wireless LANs by scanning. There aren't any other 5GHz networks in my area. All ideas welcome! Update: A while later, I've updated to the most recent drivers for both the Intel chip in the laptop and the USB adaptor. Unfortunately this hasn't improved things :(. If anybody has any advice it would be gratefully received.
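
    One way to separate raw radio throughput from file-transfer overhead (disk, SMB, etc.) is a plain TCP test with iperf, assuming it can be installed on both machines:

        # on the desktop
        iperf -s
        # on the laptop (the desktop's IP here is a placeholder)
        iperf -c 192.168.1.10 -t 30 -i 5

    If iperf reports well above the observed 4MB/s (32Mbit/s) but file copies stay slow, the bottleneck isn't the 4965AGN's wireless link at all.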

    Read the article

  • Memcached session manager in Azure: Connection gets forcibly closed

    - by Edgar Pérez
    I am using Memcached Session Manager to handle Tomcat sessions in non-sticky mode. My deployment in Azure consists of a Worker Role with two instances which connect to an Azure VM running my memcached server. Everything works pretty well; my session is persisted and retrieved by either of the two instances transparently. The problem arises when the session is idle for about 4 minutes; everything points to the Azure load balancer closing the spymemcached connection to the VM after some period of inactivity. My MSM configuration is this:

        <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                 memcachedNodes="n1:my-azure-vm.cloudapp.net:11211"
                 sticky="false"
                 sessionBackupAsync="false"
                 sessionBackupTimeout="10000"
                 lockingMode="uriPattern:/path1|/path2"
                 requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js|ttf|eot|svg|woff)$"
                 transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
                 customConverter="de.javakaffee.web.msm.serializer.kryo.HibernateCollectionsSerializerFactory"/>

    The stack trace printed by the spymemcached client is this:

        INFO net.spy.memcached.MemcachedConnection: Reconnecting due to exception on
        {QA sa=/10.194.132.206:13000, #Rops=1, #Wops=0, #iq=0,
        topRop=net.spy.memcached.protocol.binary.StoreOperationImpl@1d95da8,
        topWop=null, toWrite=0, interested=1}
        java.io.IOException: An existing connection was forcibly closed by the remote host
            at sun.nio.ch.SocketDispatcher.read0(Native Method)
            at sun.nio.ch.SocketDispatcher.read(Unknown Source)
            at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source)
            at sun.nio.ch.IOUtil.read(Unknown Source)
            at sun.nio.ch.SocketChannelImpl.read(Unknown Source)
            at net.spy.memcached.MemcachedConnection.handleReads(MemcachedConnection.java:303)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:264)
            at net.spy.memcached.MemcachedConnection.handleIO(MemcachedConnection.java:184)
            at net.spy.memcached.MemcachedClient.run(MemcachedClient.java:1298)

    Given this idle-time limitation in Azure, is there any other way to make MSM work in the Azure cloud?
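
    One workaround often suggested for idle-timeout load balancers is OS-level TCP keepalives sent more often than the balancer's cutoff. This only helps if the sockets actually have SO_KEEPALIVE set, which is worth verifying for both memcached and your spymemcached version, so treat this as a sketch rather than a confirmed fix. If the memcached VM is Linux:

        # First keepalive probe after 2 minutes idle, well under the ~4 minute cutoff
        sudo sysctl -w net.ipv4.tcp_keepalive_time=120
        sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30
        sudo sysctl -w net.ipv4.tcp_keepalive_probes=8
        # persist across reboots
        echo 'net.ipv4.tcp_keepalive_time = 120' | sudo tee -a /etc/sysctl.conf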

    Read the article

  • Clonezilla PXE Boot Without NFS

    - by John
    I am trying to set up Clonezilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information I have found online indicates that you need to set up NFS in order to PXE boot Clonezilla. I believe I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations I have used so far:

        LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following append lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
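
    One avenue worth trying before giving in and installing NFS: the live-boot scripts fetch the squashfs much more reliably over HTTP than over TFTP, so if an HTTP server can be run on the PXE host (an assumption), pointing fetch= at it sidesteps both NFS and the TFTP hang:

        LABEL Clonezilla Live (HTTP)
        MENU LABEL Clonezilla Live (HTTP)
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=http://10.130.155.23/filesystem.squashfs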

    Read the article

  • Webmin / Virtualmin running php as www-data, is locked out of viewing .htaccess and writing

    - by Kirill
    I've asked this on the Virtualmin forums, but haven't had any help from there. Recently, "something" happened and the Apache service has gone a bit weird. What it does: it runs all Apache traffic as www-data and sometimes spawns the php5-cgi process as www-data. This is a problem because the domain users own their directories, and the default permissions don't let www-data write to those folders (file uploads are dead) or read .htaccess (permalinks are broken in WordPress). I've googled this for about a week straight now, tried pretty much everything I could find, and achieved nothing. The only thing that I think might actually be the cause of all this is this page: http:// - i.imgur.com/NYW3x.png (got shut down by the spam filter). So I figured if I set it to "default", this might magically start working again, but all it does is "crash" Apache (all websites time out). I figure it's something to do with the "mpm" module or something, but I can't find anything relevant in the settings to modify for it to work. Can someone please point me in the right direction? System info: Webmin version 1.580, Virtualmin version 3.90.gpl GPL, kernel and CPU Linux 2.6.35.4-rscloud on x86_64, Ubuntu 10.04 LTS (Lucid). A couple of screenshots of top: http://i.imgur.com/U2DTK.png http://i.imgur.com/sNPKs.png
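
    A few checks that narrow down which execution mode Apache is actually in (Ubuntu paths assumed); on a healthy Virtualmin box using fcgid+suexec, the php-cgi processes belong to the domain owners, not www-data:

        apache2ctl -M | grep -Ei 'mpm|fcgid|suexec'    # which MPM and handler modules are loaded
        ps -eo user,comm | grep -E 'apache2|php'       # who owns the web and PHP processes
        grep -RiE 'suexec|FCGIWrapper' /etc/apache2/sites-enabled/ | head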

    Read the article

  • Implement QoS/Bandwidth Management or Upgrade Bandwidth?

    - by Michael
    A question that I'm faced with currently. Here's my setup: a Cisco ASA 5510 and a 15Mbps Internet connection at $1350/month. The bandwidth was originally meant for 35-45 people, but we've grown quickly to roughly 60-65 people. Needless to say, when I check the bandwidth logs the line is almost always pegged at 15Mbps. I did use Wireshark to poke around and see what was hogging our bandwidth, but with everything running through CDNs and cloud services it proved difficult to get a good grasp of where our bandwidth was going. So the question is: do I ONLY implement bandwidth management through the ASA, OR upgrade the Internet to 50Mbps ($1600/month) and then implement bandwidth management through the ASA? Any suggestions on how to segment the 15Mbps connection if we decide ONLY to go with the bandwidth management solution? Thanks. UPDATE 1: Installed PRTG and used packet content to monitor the traffic. As I suspected, still pretty vague. My top connections include the following:

        a204-2-160-16.deploy.akamaitechnologies.com
        ec2-50-16-212-159.compute-1.amazonaws.com
        a204-2-160-48.deploy.akamaitechnologies.com
        a72-247-247-133.deploy.akamaitechnologies.com
        mediaserver-sv5-t1-1.pandora.com

    Other than the Pandora destination, the rest doesn't tell me much about how to properly control the bandwidth. Any thoughts or suggestions? Thanks. M
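
    On the segmentation question: the ASA's Modular Policy Framework can police traffic per class, though it has no per-user fair queueing, so the realistic option is capping identified hogs (streaming media, bulk CDN fetches) rather than carving true per-person slices. A sketch with placeholder names and an 8Mbps cap; the ACL is where the destinations PRTG flagged would go:

        access-list BULK extended permit ip any any    ! replace with the subnets to throttle
        class-map CM-BULK
          match access-list BULK
        policy-map PM-OUTSIDE
          class CM-BULK
            police output 8000000 16000
        service-policy PM-OUTSIDE interface outside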

    Read the article

  • Sending Adobe PDF attachments from Adobe Reader (in Outlook 2003) takes too long

    - by White Island
    I have a customer who is using Outlook 2003 (Microsoft Online Services) and Adobe Reader 9+. When they send a PDF from Adobe Reader to Outlook (via the "Send as attachment to e-mail" feature in Adobe), it freezes for 30 seconds to 5 minutes before the new e-mail pops up with the PDF attachment. I'm pretty sure the issue is on the Outlook side of things, as I've tried Adobe Reader 8 and Foxit Reader with the same results (Windows XP vs. 7 doesn't seem to make a difference, either). I tried Outlook in safe mode on the first (Win7) machine I was working on, and the e-mail attachment worked a lot faster, but when I tried to replicate the results on another machine, one wouldn't go into safe mode and the other didn't seem to show a difference. In an effort to fix the problem in Outlook normal mode, I tried disabling all add-ins, the COM add-in (Office Communicator is the only one), the reading pane, and Word 2003 as e-mail editor, but none of these seemed to address the issue. Does anyone have any other ideas? I need to get this resolved as soon as possible, and it doesn't seem practical to make them run in safe mode. :P

    Read the article

  • First Request to IIS Express Fails with 503 Service Unavailable, Second Succeeds

    - by Chris Moschini
    Each time I start my ASP.NET MVC 3 app from Visual Studio 2010, IIS Express launches and IE spins waiting. The request fails with HTTP 503 Service Unavailable. I hit Refresh in IE, and the request succeeds. All subsequent requests succeed until I stop debugging. The next time I go to start debugging, the first request fails again. Has anyone else experienced this? In IISExpress\applicationhost.config I have:

        <site name="ProjectName" id="6">
          <application path="/" applicationPool="Clr4IntegratedAppPool">
            <virtualDirectory path="/" physicalPath="c:\users\chris\dropbox\code\2010\SolutionName\ProjectName" />
          </application>
          <bindings>
            <binding protocol="http" bindingInformation="*:80:laptop" />
          </bindings>
        </site>

    I have this in my hosts file:

        127.0.0.1 laptop

    And my project is set to start with IIS Express, with Project Url set to http://laptop. It's very strange that only the first request fails, perhaps as though Visual Studio isn't waiting long enough for IIS Express to start? Is there some way to make it wait? Stopping debugging, making a change, and then starting again is one of the most common things I do, so adding another step to get there is pretty annoying.

    Read the article

  • What is Apache Synapse?

    - by Aren B
    My website keeps getting hit by odd requests with the following user-agent string: Mozilla/4.0 (compatible; Synapse). Using our friendly tool Google, I was able to determine this is the hallmark calling card of our friendly neighborhood Apache Synapse, a 'lightweight ESB (Enterprise Service Bus)'. Now, based on the information I was able to gather, I still have no clue what this tool is used for. All I can tell is that it has something to do with web services and supports a variety of protocols. The info page only leads me to conclude it has something to do with proxies and web services. The problem I've run into is that while normally I wouldn't care, we're getting hit quite a bit by Russian IPs (not that Russians are bad, but our site is pretty regionally specific), and when they do, they're shoving weird (not XSS/malicious, at least not yet) values into our query string parameters. Things like &PageNum=-1 or &Brand=25/5/2010 9:04:52 PM. Before I go ahead and block these IPs/this user-agent from our site, I'd like some help understanding just what is going on. Any help would be greatly appreciated :)
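
    If blocking does turn out to be the answer and the site sits behind Apache (an assumption; IIS request filtering can do the equivalent), matching the user-agent takes a couple of lines of mod_rewrite:

        # Refuse anything identifying itself as Synapse
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} Synapse [NC]
        RewriteRule .* - [F,L]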

    Read the article

  • ClearOS - how to create a site to site VPN between two ClearOS boxes?

    - by Scott Szretter
    I plan on setting up some ClearOS boxes at several sites, and would like to set up site-to-site VPN between the remote sites and a main site (all running ClearOS Enterprise 5.2sp1, the latest version). I have found references for how to set up ClearOS to VPN in to devices such as Cisco for IPsec, and others with PPTP, but these methods do not mention how you might configure two ClearOS boxes to talk to each other over IPsec or PPTP. I also saw documentation on installing OpenVPN and using the OpenVPN client software to VPN in to the ClearOS box. I will probably use this for individual users to VPN in, but I have some small sites (1 to 10 users) that will have their own ClearOS box and need a site-to-site VPN link back to the main site's OpenVPN box. Is this possible? Can you point me to docs or other info; basically, how? A couple of updates: I did find a thread that asks the same basic question, where the user has a VPN set up between the two ClearOS machines (after installing the IPsec VPN modules), just not transporting traffic between the LANs; the very last post claims you have to edit /etc/ipsec.conf and set the leftnexthop and rightnexthop values to %direct. After that, it's supposed to work. Could it be that simple? I also posted to ClearFoundation, and they pointed me to documentation for setting up an unmanaged IPsec VPN. This looks pretty good, but I will most likely need to figure out how to handle a dynamic-DNS-type setup on at least one end. Also, what does it mean by multi-WAN? Finally, what happens when a VPN connection goes down, exactly? Does someone have to reboot the box, or...?
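
    For the unmanaged Openswan route, the thread's %direct advice fits the usual pattern: one conn shared by both boxes in /etc/ipsec.conf plus a matching pre-shared key. A sketch with placeholder addresses (a dynamic-DNS end usually becomes right=%any on the static side):

        # /etc/ipsec.conf fragment, same conn on both boxes
        conn main-to-branch
                type=tunnel
                authby=secret
                left=198.51.100.1                # main site WAN IP
                leftsubnet=192.168.1.0/24
                leftnexthop=%direct
                right=203.0.113.2                # branch WAN IP
                rightsubnet=192.168.2.0/24
                rightnexthop=%direct
                auto=start

        # /etc/ipsec.secrets
        198.51.100.1 203.0.113.2 : PSK "use-a-long-random-string"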

    Read the article

  • Recover files from corrupt filesystem

    - by Emile 81
    My situation: I have an older 80GB IDE internal HDD with a few files on it that I would very much like to recover:

    - some Word documents
    - some LaTeX documents (text files) and pictures (png, jpg, eps files)
    - some other text documents and Visual Studio project files

    I had backed them up using svn (not the LaTeX ones, though), but have not committed lately, and would lose a lot of work if I can't recover. The HDD seems to have lost its filesystem; I have no idea how it came about. I know it has/had 3 NTFS partitions, and I know the files I want are on the second or third partition. I read http://superuser.com/questions/81877/recover-hard-disk-data. Partition Find and Mount did not see all the partitions using intelligent scan. TestDisk does (I think); I followed the step-by-step instructions here, but when I try to list the files it says: "Can't open filesystem, filesystem seems damaged." I'm not sure how to proceed, as TestDisk's wiki does not contain this error message afaik. I don't know if the HDD is going to fail or if some program has caused the filesystem to become corrupt; the HDD doesn't make a sound, so I guess that's good. I would like some guidance so I don't accidentally cause more damage. (E.g., is it OK to let TestDisk write the filesystem to disk? I'm pretty sure the partitions are listed OK, but not 100%.)
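
    Before letting any tool write to that disk, the safest first step is to image it and do all recovery against the copy. A sketch, assuming GNU ddrescue is installed and a second disk has 80GB+ free (/dev/sdb is a placeholder; double-check the device name first):

        sudo ddrescue -n /dev/sdb /mnt/backup/disk.img /mnt/backup/rescue.log
        testdisk /mnt/backup/disk.img      # same analysis, zero risk to the original
        photorec /mnt/backup/disk.img      # carves doc/jpg/png/text even with no filesystem

    Letting TestDisk write a repaired partition table or filesystem to the original is only sensible once a copy like this exists.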

    Read the article

  • Nginx dynamic upstream configuration / routing

    - by Dan Sosedoff
    I was experimenting with dynamic upstream configuration for nginx and can't find any good solution to implement upstream configuration from a third-party source like Redis or MySQL. The idea behind it is to have a single configuration file on the primary server and proxy requests to various app servers based on environment conditions. Think of dynamic deployments where you have X servers that are running Y workers on different ports. For instance, I create a new app and deploy. The app manager selects a server, rolls out a worker (Ruby/PHP/Python), and then reports the ip:port to the central database with status "up". From then on, when I go to the given URL, nginx should proxy all requests to the specified ip:port upstream. The whole thing is pretty similar to what Heroku does, except this proof of concept is not supposed to be production-ready; it's mostly for internal needs. The easiest solution I found was using a resolver with a Ruby-based DNS server. It works, and nginx gets the IP address correctly, but the only problem is that you can't define a port number for that IP. A second solution (which I haven't tried yet) is to roll something else as a proxy server, maybe written in Erlang; in that case we'd need something else to serve static content. Any ideas how to implement this in a more flexible and stable way? P.S. Some research options: http://openresty.org/#DynamicRoutingBasedOnRedis https://github.com/nodejitsu/node-http-proxy
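
    The first research link is probably the shortest path to ip:port routing: OpenResty lets a few lines of Lua look the backend up in Redis per request and hand it to proxy_pass. A sketch, assuming the lua-resty-redis module and a made-up key layout of hostname -> "ip:port":

        location / {
            set $backend "";
            rewrite_by_lua '
                local redis = require "resty.redis"
                local red = redis:new()
                red:set_timeout(100)
                local ok, err = red:connect("127.0.0.1", 6379)
                if not ok then ngx.exit(502) end
                local target, err = red:get(ngx.var.host)
                if not target or target == ngx.null then ngx.exit(502) end
                ngx.var.backend = target    -- e.g. "10.0.0.12:9134"
            ';
            proxy_pass http://$backend;
        }

    Unlike the DNS-resolver approach, the Redis value carries the port as well as the address.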

    Read the article

  • Exchange 2007 restore - Backup Exec Unable to Attach to a resource

    - by Andy
    I have been struggling with this one for months! Grateful for any advice. The setup is a Windows 2003 server network with 4 servers on the domain, two of them Exchange 2007 servers (only one still with mailboxes on it). Backup Exec (12.5) runs on a non-Exchange server with agents on the others. Backup Exec runs a full backup of Exchange across the network well, at pretty reasonable speeds. However, when you try any kind of restore (individual emails, mailboxes, or whole system restore; to the same location or to an alternate server, RSG, etc.) the following message is received within about 10-15 seconds of starting the job:

        Job ended: 24 December 2010 at 13:28:32
        Completed status: Failed
        Final error: 0xe000848c - Unable to attach to a resource. Make sure that all
        selected resources exist and are online, and then try again. If the server or
        resource no longer exists, remove it from the selection list. Edit the selection
        list properties, click the View Selection Details tab, and then remove the resource.
        Final error category: Resource Errors
        For additional information regarding this error refer to link V-79-57344-33932

    Things I have already tried: changed the account to the main administrator account (with all permissions); checked the versions of ese.dll on both servers (both the same); checked that all VSS writers on both servers are stable/normal; restoring to different locations. Any advice anyone could give would be much appreciated. Many thanks, Andy

    Read the article

  • emacs, colors in term-mode

    - by valya
    Hello, I use Emacs and I run bash with the M-x term command. There is a problem: colors in the *terminal* buffer aren't the same as in Gnome Terminal, and they are worse (do you need a screenshot?). How can I fix this? This is pretty annoying :-) Thank you! Linux Mint 9, Emacs 23.1.1, x86_64.

        __________________
        /home/valentin/Work/buzzoola/buzzoola/test/vagrant
        [.../vagrant]$ echo $TERM
        eterm-color

        __________________
        /home/valentin/Work/buzzoola/buzzoola/test/vagrant
        [.../vagrant]$ echo $LS_COLORS
        rs=0:di=01;34:ln=01;36:hl=44;37:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:
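
    In Emacs 23, term and ansi-term take their palette from the variable ansi-term-color-vector, so one thing worth trying is pointing it at the colors your terminal uses. A sketch; the hex values are GNOME Terminal's stock Tango palette, an assumption to adjust if your profile differs:

        ;; order: unspecified, black, red, green, yellow, blue, magenta, cyan, white
        (setq ansi-term-color-vector
              [unspecified "#2e3436" "#cc0000" "#4e9a06" "#c4a000"
                           "#3465a4" "#75507b" "#06989a" "#d3d7cf"])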

    Read the article

  • ImageMagick PDF to JPG conversion failing

    - by Scott
    I'm trying to convert the first page of a PDF to a JPG. I'm pretty sure I got this to work with certain PDFs, but is it really possible that certain PDFs are made incorrectly and cannot be converted? I tried running this first:

        $ convert 10-03-26.pdf[1] test.jpg

    And I got the following:

        Error: /syntaxerror in readxref
        Operand stack:
        Execution stack:
           %interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2
           %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1
           %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop --nostringval--
           --nostringval-- --nostringval-- --nostringval-- --nostringval-- --nostringval--
        Dictionary stack:
           --dict:1062/1417(ro)(G)-- --dict:0/20(G)-- --dict:73/200(L)-- --dict:73/200(L)--
           --dict:97/127(ro)(G)-- --dict:229/230(ro)(G)-- --dict:14/15(L)--
        Current allocation mode is local
        ESP Ghostscript 7.07.1: Unrecoverable error, exit code 1
        convert: Postscript delegate failed `10-03-26.pdf'.

    Running this instead:

        $ convert -verbose -colorspace rgb '10-03-26.pdf[1]' test.jpg

    I get the same readxref error and stack dump, followed by the delegate command line and some JPEG table output:

        "gs" -q -dBATCH -dSAFER -dMaxBitmap=500000000 -dNOPAUSE -dAlignToPixels=0
        "-sDEVICE=pnmraw" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-g792x1611" "-r72x72"
        -dFirstPage=2 -dLastPage=2 "-sOutputFile=/tmp/magick-XXU3T44P"
        "-f/tmp/magick-XXoMKL8Z" "-f/tmp/magic2eec1F"
        Start of Image
        Define Huffman Table 0x00
          0 1 5 1 1 1 1 1 1 0 0 0 0 0 0 0
        Define Huffman Table 0x01
          0 3 1 1 1 1 1 1 1 1 1 0 0 0 0 0
        Define Huffman Table 0x10
          0 2 1 3 3 2 4 3 5 5 4 4 0 0 1 125
        Define Huffman Table 0x11
          0 2 1 2 4 4 3 4 7 5 4 4 0 1 2 119
        End Of Image
        convert: Postscript delegate failed `10-03-26.pdf'.

    Why would the conversion fail? Just as an aside, this is happening on a (gs) Grid-Service on (mt) Media Temple hosting. I cannot install programs on the server, but both ImageMagick and Ghostscript are installed. Thanks!
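
    Two details stand out here. ESP Ghostscript 7.07 is a circa-2003 interpreter and likely predates the compressed cross-reference streams that PDF 1.5+ producers emit, which is exactly where "/syntaxerror in readxref" points. Also, ImageMagick page indices are zero-based, so [1] selects the second page, not the first. A couple of quick checks (filenames as in the question):

        head -c 8 10-03-26.pdf              # prints e.g. %PDF-1.5; GS 7.x tops out around 1.4
        gs --version                        # confirms which Ghostscript convert delegates to
        convert '10-03-26.pdf[0]' test.jpg  # zero-based: [0] is the first page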

    Read the article

  • Open Source PDF reader for windows as an alternative to Adobe reader

    - by Tom Feiner
    With the latest JavaScript vulnerabilities in Adobe Reader and the bloat it has acquired over the years, I've been thinking of moving the network I'm in charge of to a different product for PDF reading on Windows. The ideal PDF reader should be:

    - Small in size (Adobe Reader is more than 200MB these days after installation).
    - As secure by default as possible (for example, JavaScript disabled by default).
    - Nice-looking, with an easy-to-use interface.
    - Not bloated with features (I just want to read PDFs, that's it).
    - Free of toolbars, unwanted add-ons, and spyware at install time.
    - Free of ads while viewing PDFs.
    - Preferably open source (this pretty much ensures no ads).
    - Full Unicode support.

    Ideally, something like Evince from GNOME would be the best option, but unfortunately that's not available on Windows. Foxit is an option, as it is small and has a nice interface, but it still has JavaScript enabled by default, which might lead to vulnerabilities, and it installs a toolbar and displays ads while reading PDFs, which is distracting. There is a site dedicated to open source PDF readers, pdfreaders.org; however, the Windows PDF readers there each have their problems, mostly interfaces not as convenient (as Evince, Adobe, or Foxit). Here's a list of all PDF software from Wikipedia; there's a "Viewers" section for each OS. What Windows PDF reader would you recommend?

    Read the article

  • Auto load balancing a two-node Hyper-V 2008 R2 Enterprise cluster?

    - by Kristofer O
    My setup is a two-node cluster with 72GB of RAM in each node and a ~10TB MD3000i iSCSI SAN. I have about 30 VMs running and keep about 15 on each server. I live-migrate to the other server if I need to run updates or whatever. Either server is capable of running all the VMs if needed, but CPU usage gets pretty high. Here are my issues. I know Hyper-V has a limit of a single live migration at a time, but why doesn't it queue them up and move them one at a time? If I multi-select, I don't get the option to live-migrate one at a time, and if I'm already migrating one, it gives me an error that it's currently migrating a VM. Is there a button I missed that tells a node to migrate all its VMs elsewhere? Another question: does anyone know the best way to load-balance VMs based on CPU and/or network utilization? I have some VMs that don't do much and some that thrash the CPU or network, and I'd like to balance them across both servers if at all possible. Is there any way to automate it? Last question: if I overcommit my cluster, is there a way to tell the cluster which VMs I want running and to save-state other VMs based on available system resources? Say one node blue-screens and the other node begins starting its VMs up; I want the unimportant ones to shut down or save-state so the important ones can stay running or come back online. Thanks just for reading all that. Any help would be great.
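
    On queueing: the 2008 R2 FailoverClusters PowerShell module will serialize migrations for you, because Move-ClusterVirtualMachineRole blocks until each live migration completes. A sketch with placeholder node names (the filter skips non-VM groups such as Available Storage):

        Import-Module FailoverClusters
        Get-ClusterGroup | Where-Object {
            $_.OwnerNode.Name -eq "NodeA" -and
            ($_ | Get-ClusterResource | Where-Object { $_.ResourceType -like "Virtual Machine" })
        } | ForEach-Object {
            Move-ClusterVirtualMachineRole -Name $_.Name -Node "NodeB"
        }

    Automated utilization-based balancing and start-priority logic are really SCVMM territory (its Performance and Resource Optimization feature) rather than something the bare cluster does on its own.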

    Read the article

  • rr.com appended to URL and I don't know why

    - by Steph
    I've been having some pretty bad and intermittent Internet connectivity issues. I keep getting timeouts in my browsers, or what appear to be timeouts. In Chrome it's generally error 21; FF just times out; and so on. While this is happening, I can go into the command line and ping or traceroute the same domain and it works fine. I use my cellphone on the same network and it's fine connecting to the same domain. And when it fails, all domains are down in all browsers: Chrome, FF, etc. I also noticed that when I try to connect to 192.168.1.1, Chrome says "did you mean www.192.168.1.1rr.com", but when I enter https://192.168.1.1 it's fine. It only seems to do this in Chrome. This is making me think I have a virus. I did some research about something called Road Runner, but I can't find any related traces. I also ran a full virus scan using NOD32 (ESET) and found nothing. Any suggestion or help would be greatly appreciated. The intermittent loss of total Internet access is really annoying, and I'm worried about why Chrome is trying to append the rr.com domain. I suspect I'm dealing with two different issues, but you never know. Also, for DNS I'm using Google's DNS servers, the famous 8.8.8.8 and 8.8.4.4.
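
    For what it's worth, rr.com is Road Runner's domain, and the usual way it ends up glued onto hostnames is as a connection-specific DNS suffix handed out over DHCP, which the resolver (and Chrome's "did you mean" heuristic) then tries when a lookup fails. A quick check on Windows:

        ipconfig /all | findstr /i "Suffix"

    If an rr.com suffix shows up there, it came from whatever issued the DHCP lease rather than from malware.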

    Read the article

  • Modem doesn't seem to pass internet connection to router

    - by Vian Esterhuizen
    I was supplied a Cisco DPC3825 modem by my ISP and use a Cisco E4200 as my main router. Today the Internet just stopped working, out of the blue; I can't think of anything that would have triggered it. It's been working pretty well for the past month except for a few random blips here and there. After some troubleshooting I realized that a direct wired connection to the modem would get me Internet access, but if I was wired to the router as before, I would have no connection. Assuming it was the router, I connected a TP-Link WR841N, but it had the exact same problem. Also, connecting to the router via Wi-Fi from multiple devices connects me to the router, but I still can't reach the Internet. From these tests, it seems the modem just won't pass the Internet connection through to the router, even though it clearly works when connected directly to my PC. What I've tried: a full reset and factory reset on the E4200; a full reset on the modem (but I believe my ISP has remote access to the modem, because the passwords and so on are always set back); my ISP has remotely reconfigured the modem. What else can I try? What should I do to narrow down the issue? Can you figure out what the problem might be based on this information?

    Read the article

  • SQL Server transaction log backups

    - by krimerd
    Hi there, I have a question regarding the transaction log backups in sql server 2008. I am currently taking full backups once a week (Sunday) and transaction log backups daily. I put full backup in folder1 on Sunday and then on Monday I also put the 1st transaction log backup in the same folder. On tuesday, before I take the 2nd transaction log backup I move the first transaction log backup from folder1 an put it into folder2 and then I take the 2nd transaction log backup and put it in the folder1. Same thing on Wed, Thurs and so on. Basicaly in folder1 I always have the latest full backup and the latest transaction log backup while the other transaction log backups are in folder2. My questions is, when sql server is about to take, lets say 4th (Thursday) transaction log backup, does it look for the previous transac log backups (1st, 2nd, and 3rd) so that this new backup will only include the transactions from the last backup or it has some other way of knowing whether there are other transac log backups. Basically, I am asking this because all my transaction log backups seem to be about the same size and I thought that their size will depend on the amount of transactions since the last transaction log backup. Example: If you have a, lets say, full backup and then you take a transac log backup and this transac log backup is lets say 200 MB and now you immediatelly take another transac log backup, this last transac log backup should be considerably smaller than the first one because no or almost no transaction occured between these two backups, right? At least, that's what I've been assuming. What happens in my case is that this second backup is pretty much the same size as the first one and I am wondering if the reason for that is because I moved the first transac log backup to a different folder so now sql server thinks that all I have is just a full backup and it then gets all the transactions that happened since the full backup and puts it in the 2nd transac log backup. Can anyone please explain if my assumptions are right? Thanks...

    Read the article

  • MaxStartups and MaxSessions configurations parameter for ssh connections?

    - by Webby
    I am copying the files from machineB and machineC into machineA as I am running my below shell script on machineA. If the files is not there in machineB then it should be there in machineC for sure so I will try copying the files from machineB first, if it is not there in machineB then I will try copying the same files from machineC. I am copying the files in parallel using GNU Parallel library and it is working fine. Currently I am copying 10 files in parallel. Below is my shell script which I have - #!/bin/bash export PRIMARY=/test01/primary export SECONDARY=/test02/secondary readonly FILERS_LOCATION=(machineB machineC) export FILERS_LOCATION_1=${FILERS_LOCATION[0]} export FILERS_LOCATION_2=${FILERS_LOCATION[1]} PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers export dir3=/testing/snapshot/20140103 find "$PRIMARY" -mindepth 1 -delete find "$SECONDARY" -mindepth 1 -delete do_Copy() { el=$1 PRIMSEC=$2 scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. } export -f do_Copy parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" & parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" & wait echo "All files copied." Problem Statement:- With the above script at some point I am getting this exception - ssh_exchange_identification: Connection closed by remote host ssh_exchange_identification: Connection closed by remote host ssh_exchange_identification: Connection closed by remote host And I guess the error is typically caused by too many ssh/scp starting at the same time. That leads me to believe /etc/ssh/sshd_config:MaxStartups and MaxSessions is set too low. But my question is on which server it is pretty low? machineB and machineC or machineA? And on what machines I need to increase the number? On machineA this is what I can find - root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config #MaxStartups 10:30:60 root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config And on machineB and machineC this is what I can find - [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config #MaxStartups 10 [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config #MaxSessions 10

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host, which means I only have access to .htaccess and not the vhosts config. I know you can put a rule in the VirtualHost config file to force SSL, which is picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made.

    Config 1. This works pretty well, but it forces double authentication if you visit http://site.com: once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2. If I add this to the top of the file, it works a lot better, in that it switches to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it redirects to https://site.com/, so it forces SSL without a double login but does not properly forward non-SSL resources to their SSL counterparts. Regarding the first config, Insyte mentioned: "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no luck finding a force-SSL option for the Redirect directive, and so have been unable to test this theory.

    Read the article

  • Triple boot Windows 7, Windows 8, and Mountain Lion on MacBook Pro

    - by Nathan
    OK, so I have a bit of a unique situation here that I could use some help on. I've modded my summer 2011 MacBook Pro to have two hard drives by replacing the optical drive. OS X Mountain Lion is installed on a single partition of a 120GB SSD. The second drive is 750GB, partitioned as 550GB, 150GB, and ~50GB. I've set the 550GB partition to act as my OS X home folder, and I'd like to install Windows 7 and Windows 8 on the remaining partitions. It took a while, but I eventually found a way to install Windows without a CD/DVD drive by following this: http://huguesval.com/blog/2012/02/installing-windows-7-on-a-mac-without-superdrive-with-virtualbox/. It worked flawlessly for creating both Windows 7 and Windows 8 images that I could clone onto FAT32 partitions. However, I have encountered a problem when trying to triple boot. After I put Windows 8 onto the ~50GB partition and tried to boot into Windows 7, I get an error that says something like:

        error: 0x0000000e
        The boot selection failed because the required device is inaccessible.

    If I re-clone the Windows 7 image onto the drive and select the option to "replace BCD" for the drive, Windows 7 will boot, but Windows 8 now gives me the same exact error. I realize this is a pretty extensive setup, but if anyone has some insight I'd love to hear it.
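
    The either/or behavior suggests each clone is leaving only one install's entry in the active BCD store. One thing worth trying, from an elevated command prompt in whichever Windows currently boots (drive letters are assumptions; the other install is taken to be mounted as D:):

        rem Add a boot entry for the other Windows to the current BCD store
        bcdboot D:\Windows
        rem Verify both entries are now present
        bcdedit /enum

    Tools like EasyBCD do essentially this through a GUI; on a Mac booting Windows through BIOS emulation it is worth verifying the result carefully before relying on it.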

    Read the article

  • Server configuration advice for new site that could get lots of traffic within 6m

    - by alchemical
    We're setting up a new Web 2.0-type site with elements of e-commerce. The budget is kind of tight. Due to the nature of the site and promotions, etc., we expect traffic could ramp up fairly quickly. I'm looking for advice on a good configuration to start with; we're looking to co-lo with CalPop in downtown LA. We've looked at Dell, ABMX.com, and got a quote from CalPop (they make their own servers, as they also do managed hosting). The price range has been anywhere from about $1200 to $3300 per server. We're thinking to start with a web server and a DB server, both with mirrored drives. It would be nice to stay under about $2k per server if possible. The minimum configuration for each would probably be a quad-core with 8GB RAM. We're thinking to run Windows Server 2008 R2 (Web Edition?) and SQL Server 2008. Looking for advice on the best server configurations and/or brands that fit the budget yet will let us scale smoothly as traffic increases. Reliability is also pretty important. Also wondering if a switch/router is necessary or useful to connect the two servers.

    Read the article

  • Vmware Player 3.0 - cannot ping 32 bits guest from 64 bits (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using Windows 7 Ultimate 64-bit as the host and have CentOS 5.4 (64-bit) as one guest and Windows XP Professional SP3 (32-bit) as another. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also on the host. The network is pretty basic: I'm using vmnet8 (NAT) with DHCP and port forwarding (to the Windows XP guest's IP). Everything is working OK; I have Internet access from the host and from both guests, and port forwarding to the XP guest works too. The only problem is that I cannot reach the XP guest through vmnet8. I monitored the traffic using Wireshark (on the host and on the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host and being answered by the guest, and after that no echo request ever leaves the host. The same occurs if I try to ping the XP guest from the CentOS guest. From the XP guest I can ping both the host and the CentOS guest, and I can access the host's shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's Internet connection). Any suggestions?

    Read the article
