Search Results

Search found 4772 results on 191 pages for 'complex'.

Page 132 of 191

  • Turn Excel spreadsheet into a formula

    - by ?????? ??????????
    I have an Excel spreadsheet that performs a complex computation which is not trivial to turn into a macro or a single-cell formula. The spreadsheet has about 10 different inputs (values a human enters in different cells of the spreadsheet) and outputs 5 independent calculations (in 5 different cells) based on that input. The calculation uses some pre-entered data in the spreadsheet (about 100 different constants) and does some look-ups on them. Now I would like to use this whole spreadsheet as a formula on a different spreadsheet: take a set of input values and produce the corresponding set of output values. Imagine this as creating a separate table with 10 columns for the input variables and 5 columns for the outputs, then copying each input row into the other spreadsheet and copying the output back into the results table. For instance:
    - A1, A2, A3, ... A10 are cells where someone enters values
    - through a series of calculations, B1, B2, B3, B4 and B5 are updated with some formulas
    Can I reuse the whole series of calculations from A1..A10 into B1..B5 without creating one massive formula or a VBA macro? I want to have a set of input values in 100 rows, from A100, B100, C100, ... J100 onward, then do some Excel magic that will:
    1. copy the values from A100...J100 into A1 to A10
    2. wait for the results to appear in B1 to B5
    3. copy the values from B1 to B5 into K100 to O100
    4. repeat steps 1 to 3 for all rows from 100 to 150
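
    A minimal sketch in Python of the copy-in/read-back loop described above, assuming the xlwings package is available (it drives a real Excel instance, so the workbook's own formulas do the recalculation). The file name, sheet name and the exact row range are placeholders.

        # Sketch only: let the existing workbook formulas do the work via xlwings.
        # "calc.xlsx" and "Sheet1" are placeholder names.
        import xlwings as xw

        wb = xw.Book("calc.xlsx")
        sheet = wb.sheets["Sheet1"]

        for row in range(100, 151):                              # rows 100..150
            inputs = sheet.range((row, 1), (row, 10)).value      # read A{row}:J{row}
            sheet.range("A1:A10").options(transpose=True).value = inputs  # write into A1..A10
            wb.app.calculate()                                   # let Excel update B1..B5
            outputs = sheet.range("B1:B5").value                 # read the 5 results
            sheet.range((row, 11), (row, 15)).value = outputs    # write into K{row}:O{row}

        wb.save()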

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to Puppet. As such I am trying to work out the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load environment-specific directives from a file matching the environment. Basically like this:
    # nodes.pp
    node base_dev {
      $service_env = 'dev'
    }
    node 'service1.ownij.lan' inherits base_dev {
      include global_env_specific
    }
    class global_env_specific {
      include shell::bash
    }
    # modules/shell/bash.pp
    class shell::bash inherits shell {
      notify { "Service env: ${service_env}": }
      file { '/etc/profile.d/custom_test.sh':
        content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
        mode    => 644
      }
    }
    But every time I run puppet agent --test, Puppet complains that it can't find the shell/bash/$service_env.erb file, even though I double-checked that it exists. I know the var is accessible because the notify statement outputs the expected value, so I suspect I am doing something that is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and there are not many changes across environments, but for more complex configs (httpd, solr, etc.) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to handle this behaviour at the template level instead of having several closely named directories. Thanks.

    Read the article

  • AWS own email domain and some generic questions

    - by John Brunner
    I'm getting started with Amazon Web Services and I have a few questions I'm not sure about. Like every (company) webpage, I want to use an "[email protected]" email address, but how is that done? I looked at godaddy.com (for domain registration); they offer me an email address like I want, but for 3 dollars per month. Is this possible with AWS? Because at AWS you just get a complex domain name, which is not very user-friendly or professional-looking. I also want to host my dynamic webpage on the Amazon cloud, but I'm not sure if I'm doing that right. I've read many guides, and all I know is that I have to purchase an Elastic Compute Cloud instance and a Simple Storage Service... and every guide works with the basic Linux package - why not Windows? Is it more expensive? I just want to host a MySQL server for the dynamic webpage, which is reached over a normal domain. And one last question: if I sign up for an AWS account it asks me for an email account, but it feels a bit unprofessional to enter my free-webmailer address there... How is this normally done? Thanks in advance! Best regards, john.

    Read the article

  • Standards for documenting/designing infrastructure

    - by Paul
    We have a moderately complex solution for which we need to construct a production environment. There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an FTP server, an ejabberd server, etc.). There are a number of WebLogic web apps, and one thing we need to decide is how many WebLogic containers to run these web apps in. The system needs to be highly available, and communications in and out of the system are typically secured by SSL. Our datacentre team will handle things like VLAN design, racking, server specification and build. So the kinds of decisions we still need to make are:
    - How to map components to physical servers (and WebLogic containers)
    - Identify all communication paths, ensure all are either resilient or that there's an "upstream" comms path that is resilient, and that failover of that depends on all single points of failure "downstream"
    - Decide where to terminate SSL (on load balancers, or on Apache servers, for instance)
    My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layouts and for more logical/software architecture diagrams. So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc. This all feels like something that must have been done many times before. Are there standards for documenting this?

    Read the article

  • Which project management software for technophobes who've never worked with something like that?

    - by Ernst
    Hi, our director has asked me to get something to manage projects. Note that so far we haven't used anything of the sort. I did not get very clear instructions yet, probably because she doesn't know exactly what we need either. My guess is we'll only find out while using something. I've looked at a few: OpenWorkbench, GanttProject, and Microsoft Project. The latter has the advantage of easy importing of users from Exchange; are there others that do that (even if not directly, but easily)? I don't think it's a critical requirement though. We're using some other custom software where I have to add users manually anyway, and we're small enough that it's maybe once a month that I have to add or remove a user. In any case, I'm not in favour of buying anything, since I'm skeptical about us actually succeeding in putting it to good use, and even if we do, we will only discover what we need during usage. We're also not a tech shop; most people vary from not very technically adept to technophobic. This means we need something very user-friendly. I prefer to stay away from online solutions, since we deal with sensitive information and I prefer to keep that in house, but I guess it would be acceptable for the trial period. An intranet site is an option, though, preferably something that is easy to set up with IIS. XPlanner Plus and Redmine I found too hard to set up for this experiment. Some other options I haven't yet tried to install, but which look too complex for our technophobes: Endeavour Software Project Management, Project-Open, Trac. Any suggestions? Thanks, Ernst

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with VIM. Unfortunately, our servers are locked down to the point where, if you want to do anything, you need root access. Naturally (although this is frowned upon), we get tired of typing sudo before each command, which would require us to constantly type in the wonderfully complex passwords that are mandated on us over and over again, so we all just execute sudo su - upon login to avoid all of this. Of course, when it comes to VIM and custom .vimrc files, we are oftentimes stepping on someone else's custom .vimrc file, and these files contain some whacked-out functionality that may override behaviour we have no idea about, much less have the patience to learn. When working as root on a Linux box, is there any way for all of us to still maintain our own .vimrc files without having to overwrite the file over and over again every time someone wants to use VIM? Ideally, we have many virtual machines all with VIM installed, so a universal solution across all servers would be best, and we do have our Microsoft Windows user-specific home directories mounted on the servers under /home/username. Any recommendations for accommodating this?

    Read the article

  • Best grep-like tool

    - by e-satis
    I do in-file searches a lot, and used to love grep. Then I learned of the existence of egrep, so I switched to benefit from the advanced regexps. Then I discovered the Eclipse search tool - much easier to use than grep. Then I found ack: fast, easy, powerful. And now I use grin, which is smooth for Pythonistas. I know there are also a couple of tools of this kind with a GUI. So what tool do you use, and why do you think it's the best? Practical features generally are:
    - fast to fire and use
    - speedy processing
    - automatically ignores useless files
    - colored output
    - outputs lines, filename, context
    - allows complex regexps
    - allows custom filtering and output
    - GUI + command-line integration
    - lets you open an editor from the result set
    There are some related posts on SO:
    http://stackoverflow.com/questions/87350/what-are-good-grep-tool-for-windows
    http://stackoverflow.com/questions/981601/colorized-grep-viewing-the-entire-file-with-highlighting
    http://stackoverflow.com/questions/1028107/is-there-some-unix-util-that-will-allow-me-to-grep-multiple-files-with-little-type
    http://stackoverflow.com/questions/1027906/unix-find-grep-syntax-vs-awk
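
    For comparison, here is a minimal sketch in Python of the kind of tool described above: recursive search with a regexp, skipping directories that are usually noise, with coloured matches and file/line output. The ignore list and the example pattern are just placeholders.

        # Minimal recursive grep sketch: regexp search with coloured matches,
        # skipping directories that are usually noise (the ignore set is an example).
        import os, re, sys

        IGNORE_DIRS = {".git", ".svn", "node_modules", "__pycache__"}
        RED, RESET = "\033[31m", "\033[0m"

        def grep(root, pattern):
            rx = re.compile(pattern)
            for dirpath, dirnames, filenames in os.walk(root):
                dirnames[:] = [d for d in dirnames if d not in IGNORE_DIRS]
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, errors="ignore") as fh:
                            for lineno, line in enumerate(fh, 1):
                                if rx.search(line):
                                    hl = rx.sub(lambda m: RED + m.group(0) + RESET, line.rstrip())
                                    print("%s:%d: %s" % (path, lineno, hl))
                    except OSError:
                        pass

        if __name__ == "__main__":
            grep(sys.argv[1], sys.argv[2])   # e.g. python minigrep.py . 'TODO|FIXME'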

    Read the article

  • Limiting bandwidth on a Windows 7 machine

    - by Mihai Damian
    I need to limit the bandwidth on my Windows 7 x64 machine. In the past (on XP) I was able to use NetLimiter for similar tasks, but for some reason I can't get it to work anymore. For lower limits, the bandwidth tests are able to exceed the limit by 10-50%; higher limits seem to be ignored completely, and the bandwidth tests report download speeds of over 10 times the speed I set. I'm using speedtest.net and a similar service from my ISP for these tests. Anyway, I don't necessarily need a program as complex as NetLimiter, since I only need to throttle my machine's bandwidth, not a specific program's. In case you are wondering why in the world I'd want to cripple my Internet speed, there is a funny story behind this. Long story short, my modem gets random disconnects. Tech support comes in, says my Internet speed is abnormally high, that I must be using some tools to somehow make it go faster than it's supposed to, and that this messes up my modem. I check the connection with another computer and it seems that my PC is the only one in my network that gets abnormal speeds. I reinstall my OS; the speed looks normal at first, but after I install the batch of 50 or so updates it goes back to abnormally high speeds, and the disconnect problems are not solved. Now I don't have a clue whether the explanation the tech team gave me was just a strategy to lay the blame on someone else, but I was trying to give them the benefit of the doubt and see what happens if I really reduce my speed to their specification. Any help appreciated.

    Read the article

  • What's throttling the database?

    - by Troels Arvin
    Hardware: Intel x86_64 with 192GB of RAM. OS: CentOS 5.4 x86_64. DBMS: DB2 v. 9.7.1, 64 bit. During certain special workloads (e.g. parallel REORGs/RUNSTATs), I've seen the server transporting 450MB/s with 25,000 IO/s (yes, there is probably some storage system caching happening here) while all CPU cores were happily working in an even mix of user mode and wait. And disk benchmark tools can also bring some very satisfying bandwidth and IO/s numbers to the table. On the other hand, we also have another scenario: a single rather complex query with at least one large table scan. DB2's "list applications" reports that the query is Executing (not locked). IO: at most 10MB/s, 500 IO/s; CPU: two cores in 99.9% wait state, all other cores 100% idle. The tables which the query reads from have been altered to have LOCKSIZE=TABLE, so I would think that lock list work is zero. What's going on in such a situation? What tools/snapshots/... can I use to gain better insight in such a case?

    Read the article

  • Goal setting/tracking packages for software projects

    - by Avi
    I'm a developer working by myself. I'm looking for a computerized tool to manage my goals and activities. I own Microsoft Project, but I don't like it: I've started many "projects" in it but could never keep on using it - too complex and heavyweight for me. I use MS Outlook tasks. They are not what I need: no planning capability, and tracking is not nice. I'm using the Pomodoro Technique and I like it, but I'm looking for something more comprehensive and with better computerized support - something that would allow me to define goals with dependencies and time estimation, keep daily prioritized lists, etc. So, I'm looking for a solution. One I've found is GoalPro, but I'm uneasy because I could not find a cross-product "top ten"-style review. Are you using any goal-setting package such as GoalPro? Which? Does it help? Pros and cons?

    Read the article

  • Linux Cups raspberry pi offload processing to server

    - by jaredmsaul
    I am interested in setting up a Raspberry Pi as the local end of a printing solution. In my testing, the Pi chokes on acting as a complete CUPS-based print server. It seems a little underpowered for some of the Ghostscript processing and other filtering that occurs - particularly on larger or complex documents, the processing time can be 5 or more minutes. My question is: can the processing be largely done elsewhere, and the prepared end product of the processing chain be fed to the Pi for output on the connected printer? So in this scenario any arbitrary document (HTML, PDF, text) is initially 'printed' on a relatively powerful machine, but the output is stored in a file. This file is then grabbed by the Pi and, with all the heavy work out of the way, easily printed using CUPS. I know files can be pushed through CUPS in raw mode, but I am fuzzy on the pros and cons and the applicability to what I describe. I have tested this with pdftops creating a PS file, then feeding that raw to CUPS, and I think it works, but it seems like there may be a cleaner solution. This scenario would ideally work for any number of printer types.
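
    A minimal sketch in Python of the offload flow described above: render on a fast machine with pdftops, ship the finished file to the Pi, and hand it to CUPS with -o raw so no further filtering happens there. The host, user and printer names are placeholders, and printing a PS file raw assumes the printer itself understands PostScript (otherwise the prepared file would need to be in the printer's native language).

        # Sketch: heavy rendering on the fast machine, raw printing on the Pi.
        # Hostname, user and printer name are placeholders.
        import subprocess

        def print_via_pi(pdf_path, pi_host="pi@raspberrypi.local", printer="office-printer"):
            ps_path = pdf_path.rsplit(".", 1)[0] + ".ps"
            # The expensive conversion happens here, not on the Pi.
            subprocess.run(["pdftops", pdf_path, ps_path], check=True)
            # Ship the prepared file to the Pi.
            subprocess.run(["scp", ps_path, pi_host + ":/tmp/job.ps"], check=True)
            # On the Pi, pass it to CUPS unfiltered.
            subprocess.run(["ssh", pi_host, "lp", "-d", printer, "-o", "raw", "/tmp/job.ps"],
                           check=True)

        print_via_pi("report.pdf")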

    Read the article

  • Mac dev folder missing, SSH not working

    - by SamGoody
    A few days ago, SSH stopped working. When I try logging in I get the following message:
    PTY allocation request failed on channel 0
    stdin: is not a tty
    fatal: unrecognized command ''
    Connection to 74.52.61.194 closed.
    Web searches have shown me that there might be something wrong with /dev/std. But my computer lacks a /dev/ folder. There is an alias to /dev/ [hidden, but I've revealed hidden files to do this search], but when I try to open it I am told that it cannot find the folder it is aliasing. Now, many a web search tells me that without a dev folder the computer doesn't work, but it does seem to work, except for SSH. Also, are there any tools that can save my SSH preferences so that I don't have to type out the username@address, password and path each time, all of which are long and complex? I'm not looking for a FileZilla-type client - there are many of those - but for a command-line tool like PuTTY that lets me use bash on the remote machine. I'm on a MacBook Pro, latest version of Tiger.

    Read the article

  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't - especially on machines not under my continuous administration. In those cases I am always challenged with the task of traversing directories, using my administrative user via Windows Explorer, where regular users do not have "read" permissions. The two possible approaches to this problem so far:
    - change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs but just look into the folder's contents
    - use an elevated cmd.exe prompt along with a bunch of command-line utilities - this usually takes a lot of time when browsing through large and/or complex directory structures
    What I would love to see would be a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. But other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need for downloading or installing anything), are very welcome too. I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders - unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token, and a re-authentication (and thus the creation of a new token) would not happen when accessing localhost.

    Read the article

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (last time I tried it, it was SuSE 7!). Finally I felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions.
    1. How do I ensure that my packages are up to date? It sounds silly, but I tried the obvious methods already. I have disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did a sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and it seemed like quite a bit happened in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. But of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right?
    2. When I start typing the arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! It always worked in Ubuntu and I feel very uneasy with this. Is this how SuSE is supposed to be?
    3. When I try to install something and I start writing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is it not happening?
    There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and also to obtain better SEO results. We have now applied some minify measures, combined HTML, CSS, etc. We use a small virtualized infrastructure, and we've always wanted to use a light + standard HTTP server configuration so the first one can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and quicker, have more MinSpareServers running, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, so the browser can open more connections. The question is: assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out - does the original code already include images linked from the CDN with absolute paths?

    Read the article

  • Choosing gateway router/firewall for small datacenter network [closed]

    - by rvs
    I'm choosing a gateway router/firewall for the small internal network of a medium-sized web service. Currently there are 5 servers in the internal network, up to 50 http(s) requests/second, up to 1000 simultaneous connections, and the uplink is 100 Mbit. So the network is relatively small and not very busy, and we don't want to buy some pricey monster like Cisco or Juniper for this site. Instead we'd like to buy two affordable devices (one as a spare) which can handle our workload now and for some time in the future (it might be up to 2x more in 1 year). I had some experience with the SonicWall NSA, but it seems to be too complex for this site (we don't need most of its features) and even too pricey when buying two of them. So, after some research I've come up with the following options:
    - Netgear ProSecure UTM series (probably UTM25)
    - ZyXEL ZyWALL series (USG100 or USG200)
    - SonicWall TZ 210
    Is this a good idea? All of the above seem to be more office products, not datacenter ones. Or should we stick with the SonicWall NSA? Does anyone have any hands-on experience with these models? Maybe some other advice? Thanks.

    Read the article

  • Best way to create and restore a Drive Image with Windows 7? [closed]

    - by jasondavis
    Possible Duplicate: Want to create a system image
    I am about to build a new PC. I am a Windows 7 user. For years now I have been wanting to install Windows and all my favorite software, music, etc., then make a drive IMAGE and be able to go in 6 months later, or WHENEVER I want to start fresh, completely format my drives, restore my IMAGE and have all my settings, programs, etc. be just like when I created the original image. I know there are many ways to do this, but I have never done it 100% successfully, and I have about a week to figure out how to do it perfectly for when I build my new PC. I have heard good things about Acronis True Image in the past for doing what I describe, and I tried using it, but the newer versions are overly complex and don't even seem to work the way I hoped. I also see that Windows 7 has some sort of drive image creator itself as well. Does the built-in Windows 7 image creator do what I am describing above? That is, a complete drive image (Windows, all programs and settings) saved to an IMAGE file that can easily be restored to ANY hard drive in the future? Please share your experiences, tips and ideas on how to achieve this in the easiest and most reliable way.

    Read the article

  • Best method to redirect internal DNS to external website?

    - by ProfessionalAmateur
    We host several web-based applications outside of our intranet. The URLs to these applications are long, complex and overall not user-friendly. For example:
    http://hostingsite:port/approot/folder/folder/login.aspx <-- (production)
    http://hostingsite:port22/approot/folder/folder/login.aspx <-- (dev)
    http://hostingsite:port33/approot/folder/folder/login.aspx <-- (test)
    I'd like to create internal DNS entries to allow users to access these sites with ease. For example:
    http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx
    http://dev --> http://hostingsite:port22/approot/folder/folder/login.aspx
    I'm not familiar with the DNS process and setup; as far as I know, a DNS entry can only point to an IP address, not to a port or a directory path as described above. Is this a correct assumption? I am thinking of throwing up an internal webserver that will listen for the internal DNS names and redirect to the external sites:
    http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx
    Is there a better way to do this?
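
    A minimal sketch in Python of the internal redirect server idea: each short internal name maps to a full external URL, and the server answers every request with a 301. The hostnames, ports and paths below are placeholders standing in for the real application URLs.

        # Sketch: tiny redirect server for the internal short names (prod/dev/test).
        # All target URLs are placeholders.
        from http.server import BaseHTTPRequestHandler, HTTPServer

        REDIRECTS = {
            "prod": "http://hostingsite:8080/approot/folder/folder/login.aspx",
            "dev":  "http://hostingsite:8022/approot/folder/folder/login.aspx",
            "test": "http://hostingsite:8033/approot/folder/folder/login.aspx",
        }

        class RedirectHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # The Host header carries the short internal name the user typed.
                host = self.headers.get("Host", "").split(":")[0].split(".")[0]
                target = REDIRECTS.get(host)
                if target:
                    self.send_response(301)
                    self.send_header("Location", target)
                    self.end_headers()
                else:
                    self.send_error(404, "No redirect configured for this name")

        if __name__ == "__main__":
            HTTPServer(("0.0.0.0", 80), RedirectHandler).serve_forever()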

    Read the article

  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with permissions that got messed up. After deleting the code and restoring it from SVN, the permissions seemed all right, but the error stayed. The setup is a bit complex, as follows:
    - I have Ubuntu 10.4 as my development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder
    I get the following error in the Apache error log:
    [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable
    The problem seems to be the .htaccess file, or the link to it:
    - When I empty the htaccess file, nothing changes
    - When I remove the link, the index include produces some output (in the Apache error log)
    - When I remove the link and replace it with the actual file, I get another error:
    [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test
    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
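
    One way to narrow down a "(13)Permission denied" like the one above is to confirm that the Apache user can traverse (+x) every directory on the way to the link target and read the file itself. A small sketch in Python, assuming the path below is the .htaccess in question; os.path.realpath follows the symlink so the real target's parent directories are checked too.

        # Sketch: print mode and ownership of every path component, since Apache
        # needs execute on each parent directory and read on the file itself.
        import os, pwd, grp, stat

        def show_permissions(path):
            resolved = os.path.realpath(os.path.abspath(path))
            current = os.sep
            for part in resolved.split(os.sep):
                if not part:
                    continue
                current = os.path.join(current, part)
                st = os.stat(current)
                print("%-55s %s %s:%s" % (
                    current,
                    stat.filemode(st.st_mode),
                    pwd.getpwuid(st.st_uid).pw_name,
                    grp.getgrgid(st.st_gid).gr_name,
                ))

        show_permissions("/var/www-vhosts/test.localhost/.htaccess")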

    Read the article

  • Can we do a DNSSEC 101? [closed]

    - by PAStheLoD
    Please share your opinions, FAQs, HOWTOs, best practices (or links to the ones you think are the best) and your fears and thoughts about the whole migration (or should I just call it a new piece of tech?). Is DNSSEC just for DNS providers (name server operators)? What should John Doe do, who hosts johndoe.com at some random provider (GoDaddy, DreamHost and such)? Also, what if the provider's name server doesn't do automatic signing magic - can John do it manually, in a fire-and-forget way, without touching KSK and ZSK rollovers and updating and headaches? Does it bring any change regarding CERT records? Do browsers support it? How come it became so complex? Why didn't they just merge it with SSL? DKIM is pretty straightforward; IANA/IETF could've opted for something like that. (Yes, I know that creating a trust anchor would still be problematic, but browsers are already full of CA certs. So they could've just let anyone get a cert for a domain for shiny green padlocks, or generate one for a poor blue lock, put it into a TXT record, encrypt the other records and let the parent zone sign the whole thing for you with its cert.) Thanks! And for disclosure (it seemed like the customary thing to do around here), I've asked the same on the netsec subreddit.

    Read the article

  • Download and process a file by FTP at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote FTP server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is Mac OS X; the eventual deployment environment is Linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download the file and to run a script that processes it at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and sending status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried-and-true existing tools, and if I do have to write code, to try to write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people long after I am gone from the project, so the intention is to use well-documented, well-supported tools as much as possible. This seems such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and sending status messages. Is that what Expect is for? What would you recommend? (The system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings.)
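
    If a small amount of custom code is acceptable, the whole download-retry-process-notify step fits in a short Python script that cron runs every six hours. This is only a sketch: the host, credentials, file paths, processing command and mail addresses are all placeholders, and the retry/notification policy is just one example of the error handling being discussed.

        # Sketch of the cron job body: fetch by FTP with retries, process the file,
        # and send a status mail if every attempt fails. All names are placeholders.
        import ftplib, smtplib, subprocess, time
        from email.message import EmailMessage

        FTP_HOST, FTP_USER, FTP_PASS = "ftp.example.com", "user", "secret"
        REMOTE_FILE, LOCAL_FILE = "forecast/latest.grib", "/data/latest.grib"
        PROCESS_CMD = ["/usr/local/bin/process_forecast", LOCAL_FILE]
        RETRIES, RETRY_DELAY = 3, 300          # three attempts, five minutes apart

        def fetch():
            with ftplib.FTP(FTP_HOST) as ftp:
                ftp.login(FTP_USER, FTP_PASS)
                with open(LOCAL_FILE, "wb") as out:
                    ftp.retrbinary("RETR " + REMOTE_FILE, out.write)

        def notify(subject, body):
            msg = EmailMessage()
            msg["Subject"], msg["From"], msg["To"] = subject, "ops@example.com", "ops@example.com"
            msg.set_content(body)
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

        def main():
            for attempt in range(1, RETRIES + 1):
                try:
                    fetch()
                    subprocess.run(PROCESS_CMD, check=True)
                    return
                except Exception as exc:
                    if attempt == RETRIES:
                        notify("Weather data download failed", repr(exc))
                    else:
                        time.sleep(RETRY_DELAY)

        if __name__ == "__main__":
            main()

    A matching crontab entry (0 */6 * * *) would then be the only scheduling glue needed.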

    Read the article

  • How to Set Up an SMTP Submission Server on Linux

    - by Kevin Cox
    I was trying to set up a mail server with no luck. I want it to accept mail from authenticated users only and deliver it. I want the users to be able to connect over the internet. Ideally the mail server wouldn't accept any incoming mail. Essentially I want it to accept messages on a receiving port and transfer them to the intended recipient out over port 25. If anyone has some good links and guides, that would be awesome. I am quite familiar with Linux but have never played around with MTAs, and I am currently running Debian 6.
    More specific problem! Sorry, that was general, and Postfix is complex. I am having trouble enabling the submission port with encryption and authentication. What works:
    - Sending mail from the local machine (sendmail [email protected]).
    - Ports are open (25 and 587).
    - Connecting to 587 appears to work: I get a "need to starttls" warning and STARTTLS appears to work. But when I try to connect with the next command I get the output below.
    # openssl s_client -connect localhost:587 -starttls smtp
    CONNECTED(00000003)
    depth=0 /CN=localhost.localdomain
    verify error:num=18:self signed certificate
    verify return:1
    depth=0 /CN=localhost.localdomain
    verify return:1
    ---
    Certificate chain
     0 s:/CN=localhost.localdomain
       i:/CN=localhost.localdomain
    ---
    Server certificate
    -----BEGIN CERTIFICATE-----
    MIICvDCCAaQCCQCYHnCzLRUoMTANBgkqhkiG9w0BAQUFADAgMR4wHAYDVQQDExVs
    b2NhbGhvc3QubG9jYWxkb21haW4wHhcNMTIwMjE3MTMxOTA1WhcNMjIwMjE0MTMx
    OTA1WjAgMR4wHAYDVQQDExVsb2NhbGhvc3QubG9jYWxkb21haW4wggEiMA0GCSqG
    SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDEFA/S6VhJihP6OGYrhEtL+SchWxPZGbgb
    VkgNJ6xK2dhR7hZXKcDtNddL3uf1YYWF76efS5oJPPjLb33NbHBb9imuD8PoynXN
    isz1oQEbzPE/07VC4srbsNIN92lldbRruDfjDrAbC/H+FBSUA2ImHvzc3xhIjdsb
    AbHasG1XBm8SkYULVedaD7I7YbnloCx0sTQgCM0Vjx29TXxPrpkcl6usjcQfZHqY
    ozg8X48Xm7F9CDip35Q+WwfZ6AcEkq9rJUOoZWrLWVcKusuYPCtUb6MdsZEH13IQ
    rA0+x8fUI3S0fW5xWWG0b4c5IxuM+eXz05DvB7mLyd+2+RwDAx2LAgMBAAEwDQYJ
    KoZIhvcNAQEFBQADggEBAAj1ib4lX28FhYdWv/RsHoGGFqf933SDipffBPM6Wlr0
    jUn7wler7ilP65WVlTxDW+8PhdBmOrLUr0DO470AAS5uUOjdsPgGO+7VE/4/BN+/
    naXVDzIcwyaiLbODIdG2s363V7gzibIuKUqOJ7oRLkwtxubt4D0CQN/7GNFY8cL2
    in6FrYGDMNY+ve1tqPkukqQnes3DCeEo0+2KMGuwaJRQK3Es9WHotyrjrecPY170
    dhDiLz4XaHU7xZwArAhMq/fay87liHvXR860tWq30oSb5DHQf4EloCQK4eJZQtFT
    B3xUDu7eFuCeXxjm4294YIPoWl5pbrP9vzLYAH+8ufE=
    -----END CERTIFICATE-----
    subject=/CN=localhost.localdomain
    issuer=/CN=localhost.localdomain
    ---
    No client certificate CA names sent
    ---
    SSL handshake has read 1605 bytes and written 354 bytes
    ---
    New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
    Server public key is 2048 bit
    Secure Renegotiation IS supported
    Compression: NONE
    Expansion: NONE
    SSL-Session:
        Protocol  : TLSv1
        Cipher    : DHE-RSA-AES256-SHA
        Session-ID: E07926641A5EF22B15EB1D0E03FFF75588AB6464702CF4DC2166FFDAC1CA73E2
        Session-ID-ctx:
        Master-Key: 454E8D5D40380DB3A73336775D6911B3DA289E4A1C9587DDC168EC09C2C3457CB30321E44CAD6AE65A66BAE9F33959A9
        Key-Arg   : None
        Start Time: 1349059796
        Timeout   : 300 (sec)
        Verify return code: 18 (self signed certificate)
    ---
    250 DSN
    read:errno=0
    If I try to connect from Evolution I get the following error: The reported error was "HELO command failed: TCP connection reset by peer".
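
    A quick way to exercise the submission port end to end (STARTTLS plus authentication), without involving a full client like Evolution, is a short Python smtplib session; the sketch below uses placeholder credentials and addresses, and the debug output shows exactly which SMTP step the server rejects.

        # Sketch: test STARTTLS + SASL auth on port 587. Credentials and
        # addresses are placeholders.
        import smtplib

        with smtplib.SMTP("localhost", 587) as smtp:
            smtp.set_debuglevel(1)               # print the full SMTP dialogue
            smtp.ehlo()
            smtp.starttls()                      # failure here -> TLS/certificate problem
            smtp.ehlo()
            smtp.login("username", "password")   # failure here -> SASL/auth problem
            smtp.sendmail("user@example.com", ["someone@example.net"],
                          "Subject: submission test\r\n\r\nIt works.")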

    Read the article

  • Is it possible to record a screen-video from a VNC server?

    - by nikie
    I have a computer that's running a VNC server. I would like to record a video of what's going on on this computer, if possible without installing additional software on that computer. Is there a program that can connect to the VNC server port and, instead of displaying the screen, save it to a video file (e.g. AVI)? Background: one of our customers sometimes has problems with the software he bought from us when he's performing a complex procedure. To help him, we offered that someone (a service technician or programmer) watches what he's doing during that procedure to find out if he's doing something wrong or if there's a bug in the software. Currently, this is done live via VNC. That has a few disadvantages:
    - The service technician has to be in the office at the time. As the customers are in different time zones, that can be in the middle of the night.
    - If the service technician forgets something or doesn't notice something, it's lost. There's no way to see what happened again.
    - Only a single computer can be watched by one service technician at a time.
    I know I could install normal screen-grab software on the computer, but we're talking about an embedded system with limited RAM, CPU and HDD space, so installing something new is not an easy decision. And VNC is already there. I could of course open a VNC client on some office PC and capture that PC's screen, but then I can only record one remote computer that way, and I often have to watch up to 8 screens in parallel. (And I don't think that screen-grabbing VNC would improve image quality, either.)
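
    One possible approach, sketched below under the assumption that the third-party vncdotool Python package is acceptable on the technician's machine (nothing extra is installed on the embedded system): connect to the VNC port, save periodic screenshots, and assemble them into a video afterwards. Host, display, password and the capture interval are placeholders.

        # Sketch: poll a VNC server and save periodic screenshots; the PNG
        # sequence can later be turned into an AVI (e.g. with ffmpeg) on the
        # technician's machine. Connection details are placeholders.
        import time
        from vncdotool import api

        client = api.connect("customer-host:0", password="secret")
        try:
            for frame in range(600):                     # about 10 minutes at 1 frame/s
                client.captureScreen("frame%05d.png" % frame)
                time.sleep(1)
        finally:
            api.shutdown()

    Several such capture loops can run in parallel from one office PC, one per customer machine, which also covers the "up to 8 screens" case.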

    Read the article

  • What user information is exposed via a browser?

    - by ipso
    Is there a function or website that can collect and display ALL of the user information that can be obtained via a browser? Background: this of course does not account for the significant cross-referencing abilities of large corporations to collate multiple sources and signals from users across various properties, but it's a first step. Ghostery is a great idea: it shows people all of the surreptitious scripts that run on any given website. But what information is available - what is the total set of values stored - that those scripts can collect from? If you log in to a search engine and stay logged in but leave its tab, is that company still collecting your webpage viewing and activity from other tabs? Can past or future inputs to pages be captured - say, comments on another website? What types of activities are stored as variables in the browser app that can be collected? This is surely a highly complex question, given the countless user scenarios - but my whole point is to be able to cut through all that and just show the total set of data available at any given point in time. Then you can A/B test and see what is available in a fresh session with one tab open vs. the same webpage but with 12 tabs open and a full day of history to boot. (Latest Firefox & Chrome, on Win7, Win8 or Mint 13, although I'd like to think that won't make too much of a difference. Make assumptions. Simple is better.)

    Read the article

  • wireless to wired internet

    - by Mark T
    My wife uses an old computer which doesn't really like having an internet connection via a USB wireless adapter. I've tried several, and none have been satisfactory: the connection gets dropped often and it is frustrating for her. A wireless card also had the same trouble, and repositioning the computer made no difference. (Two nearby laptops connect just fine and stay up, so it isn't the router.) However, the computer always worked well when I had an Ethernet cable connected to it. I know there is a box which will connect to my wireless network and provide an Ethernet cable connection, but the terminology used is so complex that I can't tell what it is I really need. In case that wasn't clear, here it is in different words: what I need is just the opposite of a wireless router. My wireless router takes my cable modem's Ethernet connection and makes it available to wireless clients. What I want is a box which acts as a wireless client to my wireless router and provides an Ethernet cable connection that I can connect to any device. I need to know the right name for a box with such capabilities. If you know of some inexpensive examples, that would also be helpful. I'm running a wireless-G network with a Linksys router.

    Read the article
