Search Results

Search found 8849 results on 354 pages for 'cloud hosting'.

Page 312 of 354

  • Why would e-mail from our own domain not be forwarded to Gmail?

    - by netboffin
    To solve a problem with spam on our server, we tried to forward e-mail from our dedicated server's mail server (the Matrix SMTP service) to Gmail, but while most e-mails got through, e-mail from our own domain all went missing. It wasn't in the inbox, spam, or anywhere else. We've had to go back to using the old system, which means my boss gets a huge amount of spam.

    We have a Windows 2003 server with IIS 6 and the Matrix SMTP service installed. I've toyed with the idea of installing a mail proxy like ASSP, but it looks pretty complicated. We're hosting 20 domains on the server as well as our own, which has an online shop whose payment system depends on e-mail. I can't start playing around with complicated solutions when they could have disastrous consequences and I don't know enough to implement them safely. So my question has two parts:

    Part One: Why can't we forward e-mails from people using the same domain? If our domain were foobar.com, then [email protected] can't receive from [email protected], but he can receive from everyone else.

    Part Two: Is there a really simple server-side solution to spam that would work with Matrix? For instance, POPFile?

    Read the article

  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/

    For dev/testing we don't need any acceleration or load balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications to HTTPS proxying/caching; all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included full iptables config script.
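
    A hedged sketch of the pieces such transparent forwarding usually needs on the Varnish box, beyond the rules above: IP forwarding has to be enabled in the kernel, and the forwarded packets traverse the FORWARD chain rather than INPUT/OUTPUT (10.0.0.241 is taken from the question; everything else is illustrative):

        sysctl -w net.ipv4.ip_forward=1                    # let the kernel route packets between interfaces
        iptables -t nat -A PREROUTING  -p tcp --dport 443 -j DNAT --to-destination 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT    # DNATed traffic hits FORWARD, not INPUT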

    Read the article

  • RAID 10 or RAID 5 for multiple VMs - what is the best choice?

    - by Lars Fastrup
    I have just ordered a new rig for my business. We do a lot of software development for Microsoft SharePoint and need the rig to run several virtual machines for development and test purposes. We will be using the free VMware ESXi for virtualization. For a start, we plan to build and start the following VMs - all with Windows Server 2008 R2 x64:

      - Active Directory server
      - MS SQL Server 2008 R2
      - Automated build server
      - SharePoint 2010 server hosting our public web site and our internal intranet for a few people (the load on this server is going to be quite insignificant)
      - 2x SharePoint 2007 development servers
      - 2x SharePoint 2010 development servers

    Beyond that we will need to build several SharePoint farms for testing purposes. These VMs will only be started when needed. The specs of the new rig are:

      - Dell R610 rack server
      - 2x Intel Xeon E5620
      - 48 GB RAM
      - 6x 146 GB SAS drives
      - Dell H700 RAID controller

    We believe the new server is going to make our VMs perform a lot better than our existing setup (2x Intel Xeon, 16 GB RAM, 2x 500 GB SATA in RAID 1). But we are not sure about the RAID level for the new rig. Should we put the 6x 146 GB SAS drives in a RAID 10 configuration or a RAID 5 configuration? RAID 10 seems to offer better write performance and a lower risk of array failure, but it comes at the cost of less drive space. Do we need RAID 10, or would RAID 5 also be a good choice for us?
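
    For a rough sense of the capacity trade-off (simple arithmetic on the drive sizes given, ignoring formatting overhead):

        RAID 10: usable ≈ (6 / 2) x 146 GB ≈ 438 GB, survives any single-drive failure, no parity write penalty
        RAID 5:  usable ≈ (6 - 1) x 146 GB ≈ 730 GB, survives a single-drive failure, but slower writes and longer rebuilds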

    Read the article

  • Windows 8.1 guest's mouse in VMware Workstation 10 is missing, but only sometimes

    - by Rob Perkins
    I have VMware Workstation 10 running on a Windows 7 machine, hosting a Windows 8.1 guest OS. Before upgrading to Workstation 10 I was using version 9, and the Windows 8 guest OS ran without difficulty or error conditions. Since upgrading to version 10 and installing the most current VMware Tools inside the guest, there are circumstances where the mouse pointer is not visible: the mouse position appears stuck at a screen location which is not the center of the virtualized display, yet mouse click and scrolling events still get processed. Once this begins happening I have to reboot the host machine to get it to stop. (VMware Tools 9.6.1 build-1378637 is what the Workstation 10 software installed.)

    The problem seems to correlate with whether the mouse is captured during Windows 8.1's boot process, before control is passed to the login screen. If I explicitly click the mouse into the guest OS and move it slowly around while the system is booting, then I see the mouse after clicking to lift the first screen and expose the password prompt, and there is never a problem within the guest. If I don't do this during boot, there is no mouse pointer, with the symptoms listed above. I have tried removing and reinstalling VMware Tools, and the other steps published for "mouse problems" in VMware's chaotic troubleshooting database. The problem persists. Is there a setting in the virtual machine's configuration which could prevent this behavior?

    Read the article

  • MX records set up

    - by andrei.troll
    I just want to know how other people set up their MX entries for mail accounts used with Google Apps. I work at a local web-hosting firm and we get a lot of tickets from clients who want these settings configured. I usually set them up something like this:

        example.com. 14400 IN MX 10 ALT1.ASPMX.L.GOOGLE.COM.
        example.com. 14400 IN MX 10 ASPMX4.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 15 ASPMX5.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 15 ASPMX2.GOOGLEMAIL.COM.
        example.com. 14400 IN MX 30 ASPMX3.GOOGLEMAIL.COM.

    I see another firm (a rival) that sets up far more MX records - roughly 10-15 entries. Am I doing something wrong? Is more better in this case? Is there a secret I'm not in on?
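
    For comparison, the five-record set that Google's own Apps documentation has typically recommended looks roughly like this (priorities and hostnames should be double-checked against the current Google documentation before use):

        example.com. 3600 IN MX 1  ASPMX.L.GOOGLE.COM.
        example.com. 3600 IN MX 5  ALT1.ASPMX.L.GOOGLE.COM.
        example.com. 3600 IN MX 5  ALT2.ASPMX.L.GOOGLE.COM.
        example.com. 3600 IN MX 10 ASPMX2.GOOGLEMAIL.COM.
        example.com. 3600 IN MX 10 ASPMX3.GOOGLEMAIL.COM.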

    Read the article

  • While using an ntfs smb share for mac users, do symbolic links and extended attributes work?

    - by scape
    We have a majority of Mac users, but we'd rather support their file sharing using a Windows server with an NTFS drive, or at least a Linux server with ext3. We've had trouble, much trouble, with the OS X Server software, and after years of it we are now looking to abandon it. What's mostly holding us back is the fact that the Mac users very often utilize symbolic links and other special features that exist on an HFS+ partition. The shared locations are mostly primary storage, not just archive storage. While there is an option to create symbolic links under NTFS, I'm curious whether there is anything I need to look out for if I were to move the files from the HFS+ partition to a new partition hosted on a Windows server, and, in addition, how well creating a symbolic link from a Mac would work. I am also worried about whether Windows backup software will ruin these special symlinks, and how placing permissions on sub-folders will work. Alternatively I could remotely back up the files using a Mac and BRU; nonetheless I still want to get away from the Mac server for hosting the shares.
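
    A quick, hedged way to test the two specific worries from a Mac client once a trial share exists, assuming it is mounted at /Volumes/shared (paths and attribute names are illustrative):

        ln -s /Volumes/shared/target.txt /Volumes/shared/link.txt        # does the server accept and preserve a symlink?
        xattr -w com.example.testattr "hello" /Volumes/shared/target.txt # do extended attributes round-trip over SMB?
        xattr -l /Volumes/shared/target.txt                              # list attributes to confirm they survived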

    Read the article

  • Caching all files in varnish

    - by csgwro
    I want my Varnish servers to cache all files. The backend is lighttpd hosting only static files, and there is an MD5 in the URL in case a file changes (e.g. /gfx/Bird.b6e0bc2d6cbb7dfe1a52bc45dd2b05c4.swf). However my hit ratio is very poor (about 0.18). My config:

        sub vcl_recv {
            set req.backend = default;
            ### passing health to backend
            if (req.url ~ "^/health.html$") {
                return (pass);
            }
            remove req.http.If-None-Match;
            remove req.http.cookie;
            remove req.http.authenticate;
            if (req.request == "GET") {
                return (lookup);
            }
        }

        sub vcl_fetch {
            ### do not cache wrong codes
            if (beresp.status == 404 || beresp.status >= 500) {
                set beresp.ttl = 0s;
            }
            remove beresp.http.Etag;
            remove beresp.http.Last-Modified;
        }

        sub vcl_deliver {
            set resp.http.expires = "Thu, 31 Dec 2037 23:55:55 GMT";
        }

    I have done some performance tuning:

        DAEMON_OPTS="${DAEMON_OPTS} -p thread_pool_min=200 -p thread_pool_max=4000 -p thread_pool_add_delay=2 -p session_linger=100"

    The main URL which is missed is /health.html - is the forward to the backend configured correctly? Disabling health checking increases the hit ratio to 0.45. Now mostly "/crossdomain.xml" is missed (from many domains, as it is a wildcard). How can I avoid that? Should I also carry other headers like User-Agent or Accept-Encoding? I think the default hashing mechanism uses URL + host/IP. Compression is used at the backend. What else can improve performance?
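
    On the /crossdomain.xml misses: Varnish's default hash includes the Host header, so an identical file requested under many different domains is stored once per domain. A hedged sketch (Varnish 3.x-style VCL, and only safe if the file really is identical for every domain) of hashing that one URL without the host:

        sub vcl_hash {
            if (req.url == "/crossdomain.xml") {
                hash_data(req.url);     # hash on the URL only, so all domains share one cached object
                return (hash);
            }
        }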

    Read the article

  • Iptables ignoring a rule in the config file

    - by Overdeath
    I see a lot of established connections to my Apache server from the IP 188.241.114.22, which eventually causes Apache to hang. After I restart the service everything works fine. I tried adding a rule with iptables -A INPUT -s 188.241.114.22 -j DROP, but despite that I keep seeing connections from that IP. I'm using CentOS and I'm adding the rule like this:

        iptables -A INPUT -s 188.241.114.22 -j DROP

    Right after that I save it using:

        service iptables save

    Here is the output of iptables -L -v:

        Chain INPUT (policy ACCEPT 120K packets, 16M bytes)
         pkts bytes target prot opt in  out source                               destination
            0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
            0     0 DROP   all  --  any any c-98-210-5-174.hsd1.ca.comcast.net   anywhere
            0     0 DROP   all  --  any any c-98-201-5-174.hsd1.tx.comcast.net   anywhere
            0     0 DROP   all  --  any any lg01.mia02.pccwbtn.net               anywhere
            0     0 DROP   all  --  any any www.dabacus2.com                     anywhere
            0     0 DROP   all  --  any any 116.255.163.100                      anywhere
            0     0 DROP   all  --  any any 94.23.119.11                         anywhere
            0     0 DROP   all  --  any any 164.bajanet.mx                       anywhere
            0     0 DROP   all  --  any any 173-203-71-136.static.cloud-ips.com  anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
            0     0 DROP   all  --  any any 74.122.177.12                        anywhere
            0     0 DROP   all  --  any any 58.83.227.150                        anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere
            0     0 DROP   all  --  any any v1.oxygen.ro                         anywhere

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in  out source                               destination

        Chain OUTPUT (policy ACCEPT 186K packets, 224M bytes)
         pkts bytes target prot opt in  out source                               destination
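
    A hedged aside on ordering and verification: -A appends to the end of the INPUT chain, so traffic matched by an earlier rule (or connections that are already established) never reaches the new DROP. Inserting at the top and watching the counters usually makes it clear whether the rule is being hit:

        iptables -I INPUT 1 -s 188.241.114.22 -j DROP       # insert at position 1 instead of appending
        service iptables save
        iptables -L INPUT -v -n --line-numbers              # -n skips DNS lookups; watch the pkts counter grow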

    Read the article

  • How long will a "safely stored" Solid-State-Drive (SSD) keep its data? (e.g. bank safety-deposit box)

    - by user31575
    Here's my use case:

      - copy photos/videos once and only once to an internal SATA solid-state drive (SSD)
      - put this drive in a well-ventilated, air-conditioned bank safety-deposit box for safekeeping

    The question: How long can I safely store a solid-state drive in such an environment, i.e. with 0% bitrot and 100% success when it is plugged back in? Are some SSDs more reliable than others for this use case (e.g. smaller vs. larger capacity, SLC vs. MLC, different brands, etc.)?

    More fodder: I have read that solid-state memory cards (e.g. CompactFlash or SD cards) have much longer durability than other media (DVDs, CDs, hard drives) for this use case - guaranteed against bitrot and other dysfunction on the order of a decade vs. a year - but I don't know if this applies to SSDs. Copying to one 500 GB SSD is easier than copying to eight 64 GB flash drives. SATA SSDs have no moving parts, but they have more "visible electronics" than a CompactFlash card, and I don't know whether those electronics can fail. I know many will point to Carbonite and other cloud backup services, but I like the simplicity of having physical copies and wanted to understand the risks/implications. Thanks.

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account.

    A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available.

    How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH.

    Currently our SSH configuration has this appended to it in order to jail the users in their own directories:

        # all customers have group 'customer'
        Match group customer
            ChrootDirectory /home/%u       # jail in home directories
            AllowTcpForwarding no
            X11Forwarding no
            ForceCommand internal-sftp     # force SFTP
            PasswordAuthentication yes     # for non-customer accounts we use keys instead

    Our servers are running Ubuntu 12.04 LTS.
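
    A hedged sketch of the usual starting point - additional password logins that share the customer's group, with setgid directories so group access stays consistent (usernames and paths are illustrative). Note that this alone does not keep customer as the literal owner of files the extra logins create; that usually needs something like bindfs or a umask convention layered on top:

        useradd -M -d /home/customer -g customer -s /usr/sbin/nologin customer_developer1
        passwd customer_developer1
        # setgid on shared directories so new files inherit the 'customer' group
        find /home/customer -type d -exec chmod g+ws {} \;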

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application, and suddenly my server response times have gone from 500 ms average to 15 s average. I have run every monitoring tool I know:

      - iostat says my disks are 0.03% utilised
      - mysqltuner.pl says I am OK (see output below)
      - top says my processor is 99% idle, with a load average of 0.20, 0.31, 0.33
      - memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
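
    A hedged my.cnf sketch that simply follows mysqltuner's own suggestions (values are illustrative starting points rather than tuned figures, and the variable names are the MySQL 5.1 ones):

        [mysqld]
        query_cache_size    = 64M     # tuner flags > 16M; prunes per day were high
        table_cache         = 1024    # tuner flags > 256; raise gradually and watch the open-file limit
        slow_query_log      = 1       # find the queries actually taking 15 s
        slow_query_log_file = /var/log/mysql/mysql-slow.log
        long_query_time     = 1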

    Read the article

  • SFTP, SCP, Secure Webdav: which is the most suitable ?

    - by Xavier Maillard
    Hi, I am currently hosting a WebDAV share set up to store files I need wherever I am. It is available via HTTPS. The thing is, I do not need all the HTTP machinery - my nginx HTTP server is only there for this WebDAV folder - and I am not sure I made the best choice. My requirements on the client side are:

      - secure transfers
      - mountable as a network drive at work, with near-realtime sync
      - usable from any OS I might use (including my Android phone)

    At first I chose WebDAV since it would pass through my work proxy (which refuses everything that is not HTTP/S on port 80 or 443). Today I am not satisfied with the setup: even if nginx's memory footprint is pretty small, its WebDAV support is not really clean or complete. What would you recommend between SFTP, SCP and the current WebDAV solution? I think SFTP is the closest fit, but I still have to find out how to pass it through my proxy ;) SCP seems quite limited from what I have read (file transfers only, if I read right). Cheers
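
    A hedged SFTP-based sketch of the "network drive" requirement, assuming the server's sshd can be made to listen on port 443 so a port-restricted proxy or firewall lets it through (a proxy that requires HTTP CONNECT would additionally need something like corkscrew; host names and paths are illustrative):

        # mount the remote directory locally over SFTP with sshfs
        sshfs -p 443 user@files.example.com:/data ~/data -o reconnect,ServerAliveInterval=15
        # unmount when done (Linux; on OS X use umount)
        fusermount -u ~/data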

    Read the article

  • Inbox not updating in Exchange 2010, all users affected

    - by TuxMeister
    I'm battling against this darn issue this morning. We have the following setup:

      - Big Hyper-V machine hosting the servers as VMs
      - VM for CAS: WEB.XXX.local
      - VM for Mailbox: EXC.XXX.local
      - Servers are running Server 2008 R2 with Exchange 2010 SP1
      - Clients are all running Windows 7 Pro x64 with Outlook 2010 x64

    The problem we're having is that nobody is able to see any emails received today (16th of October), but they are able to send externally. When I reply to an email received externally, I don't get an NDR, yet the user cannot see my email. This is what I found and tried thus far:

      - If we create a subfolder in Outlook 2010 and move any email from the inbox into that folder, the changes are immediately reflected in OWA
      - We've been sending test emails to other users internally and to external addresses; the Sent Items folder contains all those tests, synced properly to OWA as well
      - Tried creating a new profile; new emails are still missing
      - Tried disabling Cached Exchange Mode; still no luck
      - Also disabled "Download shared folders"; still no luck
      - Set up a brand new Exchange mailbox and configured it on a VM that never had Outlook on it; still the same issue
      - Tried restarting Exchange services on both CAS and Mailbox servers; no luck
      - Tried rebooting both CAS and Mailbox servers; still no luck
      - Performed a Mailbox Discovery on my admin account; emails from today are being found in the Discovery results, so the mail is there, just not updating the user inboxes

    Any idea what this hellish thing can be? I've done everything I can think of and also everything I could find out there. Let me know if you need any more details, and thanks for reading this!
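
    A few hedged Exchange Management Shell checks that fit the symptoms (mail exists in Discovery but never lands in the inbox), on the assumption that the transport or information-store side is the bottleneck; the server name is the one from the question, and Get-Queue should be run against whichever box holds the Hub Transport role:

        Test-ServiceHealth                              # are all required Exchange services actually running?
        Test-MapiConnectivity -Server EXC               # can mailboxes on EXC be reached over MAPI?
        Get-Queue -Server EXC | Format-Table -AutoSize  # is inbound mail piling up in a transport queue?
        Get-MailboxDatabase -Status | Format-Table Name,Mounted   # are the databases mounted?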

    Read the article

  • nginx giving 404 when accessing php from alias directory

    - by code90
    I am trying to migrate from Apache to nginx. The PHP sites that I am hosting need to access a shared library, which turns out to be an aliased directory. Below is the configuration I came up with. HTML files work fine, but PHP files give 404. I have read through and tried most (if not all) of the answers to similar questions without any success. Any hint on what might be causing the issue in my case?

        location /wtlib/ {
            alias /var/www/shared/wtlib_4/;
            index index.php;
        }
        location ~ /wtlib/.*\.php$ {
            alias /var/www/shared/wtlib_4/;
            try_files $uri =404;
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass 127.0.0.1:9013;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /var/www/shared/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Thanks all!

    Update: the following seems to be working fine:

        location /wtlib/ {
            alias /usr/share/php/wtlib_4/;
            location ~* .*\.php$ {
                try_files $uri @php_wtlib;
            }
            location ~* \.(html|htm|js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }
        location @php_wtlib {
            if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh and virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pick up the KVM machines from virsh/virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10. (And I'd really rather not waste another few days testing stuff that might not work in the end.)

    Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably fairly lightweight and no dedicated OS installs. We don't need any professional clustering/migration or anything, just something that will let us create, start, inspect, administer and stop virtual machines from a web page. Best regards, Tim

    Update: Anyone have any suggestions? It's awfully quiet here...

    Read the article

  • IP addresses not listed for IIS website bindings

    - by Svinn
    I recently purchased a Windows cloud server from GoDaddy. I installed IIS 7 and all the other required software. I have 50.62.1.89 plus 2 more public IPs, and a private IP of 10.1.0.2. The problem is that I am unable to access any website through any public IP: all my public IPs open the default website only. I also can't see the public IPs listed for the IIS website bindings; only the private IP is listed. On the server itself the public IPs likewise open only the default website, but I am able to open the websites using the private IP. The public IP addresses are pointed at my server correctly: I can reach the server over Remote Desktop using a public IP and, as I said, the public IPs serve the default IIS website without problems. Please help me; I've been stuck on this for the last 2 days.
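
    A hedged note on what this usually means on a NATed cloud server: the NIC only carries the private address (10.1.0.2), so IIS can only bind to that address or to "All Unassigned" - the public IPs are translated to it upstream, which is why they never appear in the binding list. Sites are then normally distinguished by host header, for example with appcmd (site and host names below are illustrative):

        %windir%\system32\inetsrv\appcmd set site "MySite" /+bindings.[protocol='http',bindingInformation='10.1.0.2:80:www.example.com']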

    Read the article

  • Puppet: writing hosts entries using an API call

    - by Ben Smith
    I'm trying to write a Puppet function that calls my hosting environment (Rackspace Cloud at the moment) to list servers, then updates my hosts file. My get_hosts function is currently this:

        require 'rubygems'
        require 'cloudservers'

        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            unless args.length == 1
              raise Puppet::ParseError, "Must provide the datacenter"
            end
            DC = args[0]
            USERNAME = DC == "us" ? "..." : "..."
            API_KEY  = DC == "us" ? "..." : "..."
            AUTH_URL = DC == "us" ? CloudServers::AUTH_USA : CloudServers::AUTH_UK
            DOMAIN   = "..."

            cs = CloudServers::Connection.new(:username => USERNAME, :api_key => API_KEY, :auth_url => AUTH_URL)
            cs.list_servers_detail.map {|server|
              server.map {|s| { s[:name] + "." + DC + DOMAIN => { :ip => s[:addresses][:private][0], :aliases => s[:name] }}}
            }
          end
        end

    And I have a hosts.pp that calls this and should write it to /etc/hosts:

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          host { $name:
            ip           => $name[ip],
            host_aliases => $name[aliases],
          }
        }

    As you can imagine, this isn't currently working and I'm getting a 'Symbol as array index at /etc/puppet/manifests/hosts.pp:2' error. I imagine that once I've realised what I'm currently doing wrong there will be more errors to come. Is this a good idea? Can someone help me work out how to do this?
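
    A hedged sketch of one way around the 'Symbol as array index' error: have the function return a hash keyed by FQDN whose values use the host type's own parameter names, and feed it to create_resources instead of indexing $name inside a define (this assumes Puppet 2.6+, where create_resources is available):

        # assume get_hosts returns entries shaped like the 'host' resource expects:
        #   { "web1.us.example.com" => { "ip" => "10.0.0.5", "host_aliases" => "web1" }, ... }
        class hosts::us {
          $hosts = get_hosts("us")
          create_resources('host', $hosts)   # one host resource per hash entry
        }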

    Read the article

  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this:

      - Local dev machine running NetBeans to make changes
      - Remote server hosting the Git projects (the same server running Apache) - two subsites exist, test.FQDN.com and live.FQDN.com

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits made on the feature branch would deploy to test.FQDN.com. Once the features have been tested and merged into the master branch, they would deploy to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use "git checkout -f" to deploy to the test.FQDN.com site, but that only pulls the master branch and not the new feature branch. I do not have any funding to use a third party to make this work and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help control this. Any suggestions would be great!
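
    A hedged post-receive sketch along those lines - check which ref was pushed and check it out into the matching docroot (the docroot paths are illustrative; the FQDN placeholders are the question's own):

        #!/bin/sh
        # hooks/post-receive in the bare repository on the web server
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            if [ "$branch" = "master" ]; then
                GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master
            else
                GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f "$branch"
            fi
        done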

    Read the article

  • ERROR: Not enough space?

    - by dsmoljanovic
    Now this is a very unspecific question, but I'm trying to figure out what this message could mean. Here is the story behind it: I'm installing Oracle Enterprise Manager Cloud Control (12c R3) on Solaris 10 (5/09). The installer opens up, I enter all the needed information, and at the last step I click Install. It immediately crashes with only "ERROR: Not enough space" written to the log and console, and nothing else. Now, is this a Java error or a Solaris error? I'm thinking it happens either when it starts to copy files or when it tries to launch a process that would do that. What space is it referring to - disk (have enough), swap (also), memory (yep)? Any ideas are helpful.

    Edit: I found this exception in the oraInventory logs:

        oracle.sysman.oii.oiic.OiicInstallAPIException: Not enough space
            at oracle.sysman.oii.oiic.OiicAPIInstaller.initInstallSession(OiicAPIInstaller.java:2165)
            at oracle.sysman.oii.oiic.OiicAPIInstaller.initOUIAPISession(OiicAPIInstaller.java:790)
            at oracle.sysman.install.oneclick.EMGCOUIInstaller.prepareForInstall(EMGCOUIInstaller.java:676)
            at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext$1.run(EMGCSummaryDlgonNext.java:243)
            at java.lang.Thread.run(Thread.java:662)
            at oracle.sysman.install.oneclick.EMGCSummaryDlgonNext.actionsOnClickofNext(EMGCSummaryDlgonNext.java:1067)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at oracle.sysman.install.oneclick.EMGCUtil.performonClickOfNextForClass(EMGCUtil.java:399)
            at oracle.sysman.install.oneclick.EMGCUtil.performPageLevelValidationsForSilentInstall(EMGCUtil.java:367)
            at oracle.sysman.install.oneclick.EMGCInstaller.prepareForSilentInstall(EMGCInstaller.java:1459)
            at oracle.sysman.install.oneclick.EMGCInstaller.main(EMGCInstaller.java:1553)

    Disk status:

        bash-3.00$ df -h /tmp
        Filesystem   size   used  avail capacity  Mounted on
        swap         8.1G   2.7G   5.4G    33%    /tmp
        bash-3.00$ df -h /u01
        Filesystem   size   used  avail capacity  Mounted on
        /            275G    28G   244G    11%    /

    Swap:

        root@gs12emcc # swap -s
        total: 18306040k bytes allocated + 3837808k reserved = 22143848k used, 5712664k available

    Read the article

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?) The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe data and restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old" was right and therefore the SMSes, calendar entries, etc. were discarded? As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt appears to be still unclear, I figured I'd try to get my stuff migrated into BitLocker at least for the time being. I nearly never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's still relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for needing volumes to be 64 MB large, when it's not even appearing to use that space? BitLocker FAQ on TechNet

    Read the article

  • Google Chrome no longer treats " Web Apps" specially

    - by Adrian Petrescu
    I'm running Google Chrome (Dev Channel), with the --enable-apps flag, in both OSX and Ubuntu. I have four or five WebApps installed, and they appear in the "New Tab" page just fine. The problem is that, before, when the feature first became available in the Dev Channel, the actual tabs hosting the webapps received special treatment; they would have 3D Dock-like look, and (more importantly) the tab bar would be hidden while using that tab. Sometime in the last few weeks, however, it seems that the special treatment just disappeared with one of the daily updates. The webapps still show up in the New Tab page, they still work in the sense that they capture all URLs going to that webapp, and they use the right icons; but they've basically become indistinguishable from just a regularly pinned tab. The two special features mentioned above have disappeared, on both Ubuntu and OS X. My questions are simply: a) Does this happen to anyone else? When exactly did it begin? b) Why did Google regress the feature? c) Is there any flag I can enable to get it back?

    Read the article

  • Is it bad to redirect http to https?

    - by jasondavis
    I just installed an SSL certificate on my server. I use a web hosting panel called ZPanel, which is an open source project. It set up a redirect for all traffic on my domain on port 80 to port 443. In other words, all my http://example.com traffic is now redirected to the appropriate https://example.com version of the page. The redirect is done in my Apache virtual hosts file with something like this:

        RewriteEngine on
        ReWriteCond %{SERVER_PORT} !^443$
        RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

    My question is, are there any drawbacks to using SSL? Since this is not a 301 redirect, will I lose link juice/ranking in search engines by switching to HTTPS? I appreciate the help. I have always wanted to set up SSL on a server, just for the practice of doing it, and I finally decided to do it tonight. It seems to be working well so far, but I am not sure if it's a good idea to use this on every page. My site is not e-commerce and doesn't handle sensitive data; it's mainly for looks and the thrill of installing it for learning.

    UPDATED ISSUE: Strangely, Bing creates this screenshot from my site now that it is using HTTPS everywhere...
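
    On the 301 concern, a hedged variant of the same rule issued as a permanent redirect, which is the signal search engines generally treat as "transfer ranking to the new URL":

        RewriteEngine on
        RewriteCond %{HTTPS} !=on
        RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R=301,L]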

    Read the article

  • Synchronize Dreamweaver over an SSH tunnel using an SFTP connection

    - by Aeo
    Maybe... Just maybe... I'm asking too much here. Maybe I'm even barking up the wrong tree. I'm looking to essentially have Dreamweaver establish an SSH tunnel to one machine, and then use that connection to synchronize a site that is on another machine entirely. Now for some details: we've got two connections here at work - our office connection for day-to-day business, and a fancy connection hosting our web servers upstairs. For the most part they've been mutually exclusive until recently. We had been establishing an SFTP connection to synchronize our web sites by going out over the office connection to the web and coming back in over the fancy connection to our servers upstairs. Recently-ish, we established a LAN connection to one of our servers, which makes a pleasant change in VNC connection quality. Thanks to Vinagre, this makes it really easy to connect to any of our servers over this LAN connection via an SSH tunnel for VNC. However, in spite of that new LAN connection, we still synchronize over the net: out the office connection and in on the fancy one upstairs. I'm looking to change this. I'd like to get Dreamweaver to first tunnel over our LAN connection to the servers, and then go from there to whatever connection it needs. Am I asking too much?

    The current setup: Dreamweaver is installed on Windows XP, which is running within VirtualBox on top of Ubuntu 10.10. The network connection for VirtualBox is currently made in NAT mode, but could easily be switched to a bridged connection should it need to be. The LAN connection is to 1 of 5 servers running CentOS 5.
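
    A hedged sketch of the usual workaround when a client application has no native tunnel support: open a local port forward with ssh and point Dreamweaver's SFTP site definition at localhost (host names and ports are illustrative):

        # from the workstation (or inside the XP VM with a Windows SSH client such as plink/PuTTY):
        ssh -N -L 2222:webserver.internal:22 user@lan-reachable-server
        # then configure the Dreamweaver site to use SFTP against host "localhost", port 2222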

    Read the article

  • DNS - wildcard vs. CNAME subdomains

    - by Matthew
    Alright, I have to admit I'm confused about how DNS works. I've always just added things until they worked, and now it's time to learn how they actually work. One confusing thing to me is that there are sort of two places I can have records: I have an account with Rackspace Cloud Servers, and then there's the place where I registered the domain, and both allow me to edit DNS records. Should I do everything in both places, is one better than the other, or am I missing the point? Subdomains confuse me too. I'd like to be able to just have a wildcard subdomain (I've done this in the past); I just don't like the idea of adding a CNAME record or an A record every time I need a new subdomain. Then I read this, and it says: "The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used."
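
    For reference, a hedged zone-file sketch of what a wildcard looks like next to explicit records (illustrative values): the wildcard only answers for names that have no records of their own, which is where most of the RFC 1034 surprises come from.

        ; explicit records win; the wildcard catches anything not otherwise defined
        example.com.      3600  IN  A      203.0.113.10
        www.example.com.  3600  IN  CNAME  example.com.
        *.example.com.    3600  IN  A      203.0.113.10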

    Read the article
