Search Results

Search found 10078 results on 404 pages for 'bad man'.

  • What's the best project management software for a 5-man internal dev shop?

    - by P.Brian.Mackey
    I work for a large corporation, but we do small intranet web application development. Our project management tracking sucks: it's custom software built by a jr. intern. For what it's worth, our development style is akin to agile, but there's nothing set in stone... a very customer-oriented approach. I need project tracking that meets these criteria:

    - Intranet, internal products. Mostly maintenance, some new development.
    - 5 developers, 12 products, 1 hands-off manager. He really just wants to know estimated man hours and due dates for dev, QA and release, along with a short description of each project.
    - Free or super cheap.
    - Bonus: simple, pretty UI. Think pretty charts.

    Hope I covered everything. Please ask for any clarification. If you read Dreaming in Code, the company uses some project tracking software that sounds pretty sweet. Note, we do have Team Foundation Server. I already tried pushing its use as PM tracking, but it's too complicated and I can't get people to sit and train. So this software has to be easy.

    Read the article

  • How does a one-man developer do his games' sounds?

    - by Gustavo Maciel
    Before anything, this is not an "oh, where can I find resources?" question. I've been curious about one thing in the indie games industry. In game development, tasks like game design, art, sketches, and programming can easily be done by just one person. You can just take up paper and pencil and you're a game designer. Take software like Photoshop or Paint and you're an artist, a scanner and you're a sketcher, a compiler and you're a programmer. For sound it's different. You may tell me: well, follow the same line, grab a bunch of instruments and record them. But we all know things don't work that way. I can list some of the challenges: external noise is a big problem, sound effects can't be made with instruments, and it can't just sound like a roughly recorded and clipped sample. I can imagine how they do this in large companies, with their big studios and so on. But to summarise, my question is: what's the best way for a one-man indie to do all of his sound? Does he have to synthesize everything? Record it all and buy some specialized program for editing sounds?

    Read the article

  • Museum of Modern Art Starts Video Game Collection; Acquires Myst, Pac-Man, and More

    - by Jason Fitzpatrick
    The Museum of Modern Art is weighing in on the video-games-as-art debate by starting a collection of iconic video games and putting them up for public display. Read on to see which games are included in the initial batch and the MoMA’s reasons for starting a video game collection. Although the collection is slated to grow to over 40 titles, the seed batch is 14 titles, including Pac-Man, Tetris, Sim City 2000, Myst, Portal, and Dwarf Fortress. In the announcement, the museum explains the motivation for building a video game collection: "Are video games art? They sure are, but they are also design, and a design approach is what we chose for this new foray into this universe. The games are selected as outstanding examples of interaction design—a field that MoMA has already explored and collected extensively, and one of the most important and oft-discussed expressions of contemporary design creativity. Our criteria, therefore, emphasize not only the visual quality and aesthetic experience of each game, but also the many other aspects—from the elegance of the code to the design of the player’s behavior—that pertain to interaction design. In order to develop an even stronger curatorial stance, over the past year and a half we have sought the advice of scholars, digital conservation and legal experts, historians, and critics, all of whom helped us refine not only the criteria and the wish list, but also the issues of acquisition, display, and conservation of digital artifacts that are made even more complex by the games’ interactive nature. This acquisition allows the Museum to study, preserve, and exhibit video games as part of its Architecture and Design collection." The above quote is only a small snippet of a much lengthier look at the benefits of examining and preserving video games; hit up the link below to check out the full post, including future titles the MoMA would like to include in its archive. Video Games: 14 in the Collection, for Starters [Inside/Out]

    Read the article

  • One-man software developer product success stories? [on hold]

    - by EugeneKr
    I've got a bad feeling that this question is not appropriate here... Hopefully you can point me to the right place to ask such a thing (not Google, though; been there). I want to create my own product, but for some reason I have no ideas, so I decided to see what people have already done. I would like to start by myself too; I don't mind expanding, but at a later stage when it is absolutely necessary. Anyway, to give you an example: there is a guy who created bingo card generation software, and somebody else made wedding planner software, and they both seem to be doing pretty well. I would like to know more such cases to draw inspiration from. Do you know such people, or maybe you are one of them? Also, if there are places on the net where they dwell, don't hesitate to tell me :) Thanks!

    Read the article

  • DateTime::Astro::Sunrise is giving bad values

    - by BozoJoe
    Using the example code included in the man page for DateTime::Astro::Sunrise, I'm getting back ~14:00 for the sunrise and ~2:00 for the sunset. My machine's time and time zone are set correctly (AFAIK). Am I reading something wrong? 2 AM and 2 PM are just so brutally wrong.
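    Sunrise/sunset values that are wrong by roughly the local UTC offset are often the classic UTC-versus-local-time mix-up rather than a broken library. As a hedged illustration (in Python rather than Perl, and assuming a hypothetical US West Coast location), treating the reported 14:00 as UTC and converting it to a local zone shows how a sane value may be hiding in the output:

    ```python
    # Sanity check: does the "wrong" sunrise make sense once read as UTC?
    # Assumption: the reported 14:00 is actually 14:00 UTC; the zone below is a placeholder.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    reported_sunrise = datetime(2010, 6, 1, 14, 0, tzinfo=timezone.utc)
    local = reported_sunrise.astimezone(ZoneInfo("America/Los_Angeles"))
    print(local.strftime("%H:%M %Z"))  # 07:00 PDT -- a plausible June sunrise
    ```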

    Read the article

  • Nginx+Passenger: 502 Bad Gateway from Nginx when passing urlencoded URLs in GET vars

    - by jimeh
    Here's an example of the URLs that don't work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2Fin%2Fperson
        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com%2F

    However, the following URL does work:

        http://domain/do?url=http%3A%2F%2Fwww.linkedin.com

    Also, this only happens with Nginx; using Passenger with Apache it works fine, but we use Nginx on our production machines. Here's the entry in Nginx's error log:

        2009/12/01 09:30:51 [error] 6407#0: *136 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: domain, request: "GET /do?url=http%3A%2F%2Fwww.linkedin.com%2F HTTP/1.1", upstream: "passenger://unix:/tmp/passenger.6335/master/helper_server.sock:", host: "domain"

    Read the article

  • Setting up a transparent SSL proxy

    - by badunk
    I've got a Linux box set up with two network cards to inspect traffic going through port 80. One card is used to go out to the internet; the other one is hooked up to a network switch. The point is to be able to inspect all HTTP and HTTPS traffic on devices hooked up to that switch, for debugging purposes. I've written the following rules for iptables:

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    On 192.168.2.1:1337, I've got a transparent HTTP proxy using Charles (http://www.charlesproxy.com/) for recording. Everything's fine for port 80, but when I add similar rules for port 443 (SSL) pointing to port 1337, I get an error about an invalid message through Charles. I've used SSL proxying on the same computer before with Charles (http://www.charlesproxy.com/documentation/proxying/ssl-proxying/), but have been unsuccessful doing it transparently for some reason. Some resources I've googled say it's not possible; I'm willing to accept that as an answer if someone can explain why. As a note, I have full access to the described setup, including all the clients hooked up to the subnet, so I can accept self-signed certs from Charles. The solution doesn't have to be Charles-specific, since in theory any transparent proxy will do. Thanks!

    Edit: After playing with it a little, I was able to get it working for a specific host. When I modify my iptables to the following (and open 1338 in Charles for reverse proxying):

        nat
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.1:1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.2.1:1338
        -A PREROUTING -i eth1 -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1338
        -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

    I am able to get a response, but with no destination host. In the reverse proxy, if I just specify that everything from 1338 goes to the specific host I wanted to hit, it performs the handshake properly and I can turn on SSL proxying to inspect the communication. This setup is less than ideal, because I don't want to assume everything from 1338 goes to that host. Any idea why the destination host is being stripped? Thanks again.
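    A note on the destination host being stripped: with DNAT/REDIRECT the original destination never appears in the TLS payload, but the Linux kernel remembers it, and a transparent proxy can recover it per connection via the SO_ORIGINAL_DST socket option (this is how Squid's intercept mode works). Charles may simply not expose this. Below is a minimal Python sketch of reading it on the redirected port; the listen address and port are taken from the rules above, everything else is illustrative:

    ```python
    import socket
    import struct

    SO_ORIGINAL_DST = 80  # constant from <linux/netfilter_ipv4.h>

    def original_dst(conn):
        """Return the (ip, port) the client was actually trying to reach
        before iptables REDIRECTed it to this listener."""
        raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)  # struct sockaddr_in
        port = struct.unpack("!H", raw[2:4])[0]
        ip = socket.inet_ntoa(raw[4:8])
        return ip, port

    # Minimal listener on the REDIRECT target port (1338 in the rules above).
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("192.168.2.1", 1338))
    listener.listen(5)

    conn, addr = listener.accept()
    print("client", addr, "was really connecting to", original_dst(conn))
    ```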

    Read the article

  • Getting 400 Bad Request when requesting by server name on nginx/uwsgi

    - by Marc Hughes
    I'm trying to run two different sites on nginx via different ports (they each have a load balancer that points to the appropriate port). The first site works perfectly. The second site:

    - If I access http://localhost:81/ it works correctly
    - If I access http://127.0.0.1:81/ it works correctly
    - If I access the hostname http://THEHOSTNAME:81/ it fails with a 400 error
    - If I access the public IP http://x.x.x.x:81/ it fails with a 400 error

    I've set the error_log to info, but the only lines I get in the logs when this happens are:

        ==> /var/log/nginx/access.log <==
        10.183.38.141 - - [24/Aug/2014:21:03:28 +0000] "GET / HTTP/1.1" 400 37 "-" "curl/7.36.0" "-"

        ==> /var/log/nginx/error.log <==
        2014/08/24 21:03:28 [info] 7029#0: *5 client 10.183.38.141 closed keepalive connection

    In my uwsgi log, I only see this:

        [pid: 6870|app: 0|req: 87/92] 10.28.23.224 () {32 vars in 380 bytes} [Sun Aug 24 21:05:21 2014] GET / => generated 26 bytes in 1 msecs (HTTP/1.1 400) 2 headers in 82 bytes (1 switches on core 2)

    What should be my next step in debugging this?
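    Since localhost and 127.0.0.1 behave differently from the hostname and public IP, the main thing the backend sees change is the Host header (a common culprit is an underscore or other character in the hostname that uwsgi or the app framework rejects). One way to isolate that is to hit the same working address while forcing different Host values; a small Python sketch (the hostname and IP below are placeholders):

    ```python
    # Send the same request to the same address, varying only the Host header.
    # If "Host: THEHOSTNAME" gets a 400 while "Host: localhost" gets a 200,
    # the problem is header handling, not routing.
    import http.client

    for host_header in ("localhost", "THEHOSTNAME", "203.0.113.10"):  # placeholders
        conn = http.client.HTTPConnection("127.0.0.1", 81, timeout=5)
        conn.request("GET", "/", headers={"Host": host_header})
        resp = conn.getresponse()
        print(f"Host: {host_header:<15} -> {resp.status} {resp.reason}")
        conn.close()
    ```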

    Read the article

  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10) and PHP 5, and I am trying to get memcached running locally. I installed everything as per the docs: memcached daemon, PHP PECL extension, libevent, etc. But now I can only run half of the example script for memcached append():

        <?php
        $m = new Memcached();
        $m->addServer('localhost', 11211);
        $m->setOption(Memcached::OPT_COMPRESSION, false);
        $m->set('foo', 'abc');
        $m->append('foo', 'def');
        var_dump($m->get('foo'));
        ?>

    The script terminates at append() with an RES_BAD_PROTOCOL error message. It still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get, with the correct value of 'abc') and not work for append; it also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much.
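    One way to narrow this down is to drive the same memcached server with a different client: if append works from another client, the suspect is the PECL memcached / libmemcached pairing rather than the server. A rough check with the Python pymemcache package (a third-party client, chosen here only for illustration):

    ```python
    # pip install pymemcache  -- used here only to test the server's append support
    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))
    client.set("foo", b"abc")
    client.append("foo", b"def")   # uses the plain-text protocol's append command
    print(client.get("foo"))       # expected: b'abcdef'
    ```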

    Read the article

  • Bad response from freeSSHd server.

    - by Kirill
    I'm using an SSH client library called Granados to connect to servers. When I use CopSSH as the SSH server, everything works fine, but when I use freeSSHd as the SSH server I get a strange response that contains something like this:

        [4;41H [4;49H [4;42H [4;49H [4;43H [4;49H [4;44H [4;49H [4;45H [4;49H [4;46H [4;49H [4;47H [4;49H [4;48H [4;49H [4;1HC:\Users\Administrator\Desktopcat /proc/meminfot [4;52H [4;50H [4;1HC

    Does anybody know what these symbols mean? Thanks.
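    Those bracketed sequences look like ANSI/VT100 cursor-positioning escape codes (e.g. ESC[4;41H moves the cursor to row 4, column 41). freeSSHd appears to allocate an interactive terminal and paint the screen, while CopSSH returns plain output. If the goal is just to read the command output, one option is to filter the control sequences out on the client side; a rough Python sketch of such a filter (the regex is the common CSI pattern, not anything freeSSHd-specific):

    ```python
    import re

    # Matches an optional ESC byte followed by a CSI sequence such as "[4;41H".
    # (The ESC may already have been dropped by the transport, as in the sample.)
    ANSI_CSI = re.compile(r"(?:\x1b)?\[[0-9;?]*[A-Za-z]")

    def strip_ansi(text):
        """Remove terminal control sequences, keeping the printable output."""
        return ANSI_CSI.sub("", text)

    sample = "[4;41H [4;49H [4;1HC:\\Users\\Administrator\\Desktop"
    print(strip_ansi(sample))  # roughly: "  C:\Users\Administrator\Desktop"
    ```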

    Read the article

  • I get a 502 bad gateway ONLY with a specific combination of domain/root folders - NGINX

    - by Patrick De Amorim
    I have a VPS running NGINX and virtual hosts, with a configuration such as this:

    Domains pointing to it:

        lolpics.no
        smscloud.no
        idmag.no

    Root folders:

        /home/vds/www/lolpics
        /home/vds/www/smscloud
        /home/vds/www/idmag

    SMSCloud.no is the site that keeps getting 502 errors, but if I make that domain point to any of the other folders, the site works, and if I make any other domain name point to the /home/vds/www/smscloud folder, it also works. Only smscloud.no combined with /home/vds/www/smscloud breaks. I tried putting this inside the http{} block in my nginx.conf, with no help:

        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;

    EDIT: Well, that was slightly silly. If anyone from Google stumbles on this, here's how I fixed it: I just added this to the http{} block:

        fastcgi_buffer_size 16k;
        fastcgi_buffers 16 16k;

    So the start of my http block is:

        http {
            include /etc/nginx/mime.types;
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
            fastcgi_buffer_size 16k;
            fastcgi_buffers 16 16k;

    Read the article

  • Bad Performance when SQL Server hits 99% Memory Usage

    - by user15863
    I've got a server that reports 8 GB of RAM at 99% usage. When I restart SQL Server, it drops down to about 5% usage, but gradually builds back up to 99% over about two hours. When I look at the SQL Server process, it's reported as only using about 100K of RAM, and that number generally never moves up or down by much. In fact, if I add up all the processes in Task Manager, they barely scratch the surface of my total available memory (yet Task Manager still shows 99% memory usage with "All processes shown"). It appears that SQL Server has a huge memory leak going on, but it's not reporting it. The server has run fine for nearly two years, with this only starting to manifest itself in the last 3-4 weeks. Anyone seen this or have any insight into the problem? EDIT: When the server hits 99%, performance goes downhill. All queries to the server, apps, etc. come to a crawl. Restarting the service makes things zippy again, until two hours have passed and the server hits 99% once more.
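    For what it's worth, this pattern is also consistent with SQL Server's default behaviour rather than a leak: the buffer pool grows until it has taken nearly all physical memory, and when that memory is allocated as locked/AWE pages Task Manager shows only a tiny working set for the process. The usual mitigation is to cap "max server memory". A hedged sketch of checking and setting that cap from Python with pyodbc (the connection string and the 6144 MB figure are placeholders, and the same sp_configure statements can of course be run directly in SSMS):

    ```python
    # pip install pyodbc -- connection details are placeholders
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=localhost;Trusted_Connection=yes;",
        autocommit=True,  # RECONFIGURE cannot run inside a user transaction
    )
    cur = conn.cursor()

    # 'max server memory (MB)' is an advanced option, so expose those first.
    cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")

    # Show the current cap (2147483647 MB means effectively unlimited).
    cur.execute("EXEC sp_configure 'max server memory (MB)';")
    print(cur.fetchall())

    # Leave roughly 2 GB of the 8 GB box to the OS and other processes.
    cur.execute("EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;")
    ```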

    Read the article

  • monitoring a /21 for potential bad guys with snort and port mirroring

    - by Adeodatus
    Hi all, I want/need to start monitoring our network a bit better. It's an odd network in that it comprises two /22 public ranges and a slew of private admin IPs. I do have one point in the network where it all comes together, and I can turn on port mirroring on the Catalyst. Off that port, I'd like to turn up a box running various utilities. Snort is high on my list, but it'd be nice to also get some network statistics with something like NetFlow. So, what are people's thoughts? I can turn up a box for this easily enough; we have the hardware available. What should I run? I'd love to know what kind of nasty things are potentially going on, but I'd also like to see statistics on what people are doing on the network, so I can tweak our systems to handle it better and improve performance. I'm open, so please give me some ideas to go along with what I've got.
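    While weighing Snort and NetFlow options, it can help to get a first rough picture of what the mirror port actually carries. A few lines of Python with Scapy (a third-party package; the interface name is a placeholder, and this is a traffic-volume sketch, not an IDS) will show the top talkers:

    ```python
    # pip install scapy -- needs root to sniff the mirror interface
    from collections import Counter
    from scapy.all import IP, sniff

    bytes_per_src = Counter()

    def tally(pkt):
        if IP in pkt:
            bytes_per_src[pkt[IP].src] += len(pkt)

    # Watch 60 seconds of traffic off the mirror port (interface name is a guess).
    sniff(iface="eth1", prn=tally, store=False, timeout=60)

    for src, nbytes in bytes_per_src.most_common(10):
        print(f"{src:15} {nbytes / 1024:10.1f} KiB")
    ```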

    Read the article

  • Restarting nginx with Capistrano results in 502 Bad Gateway

    - by blee
    Here's what cap deploy does:

        sudo -p 'sudo password: ' -u root /var/rails_apps/fooapp/current/script/process/reaper

    The reaper script simply contains /etc/init.d/nginx restart. When I run the same command from the shell, I do not get a 502; everything is fine. The nginx error.log is empty. Any thoughts on how to troubleshoot? Thanks in advance for your thoughts.

    Read the article

  • What program should I use for SSL stripping and re-encrypting

    - by Sparksis
    I'm trying to strip an HTTP-over-SSL connection down from SSL and then re-encrypt the channel (with a signed certificate, or certificates, I can provide). Of course I want to be able to store captures of all the unencrypted data. The purpose of this is to reverse engineer an HTTP handshake used by a SIP program on my machine. I've tried SSLstrip, but it doesn't support what I need it to. Edit: I want something to the effect of https://github.com/applidium/Cracking-Siri/blob/master/tcpProxy.rb, only more generic and able to write to a pcap stream that Wireshark will understand (I'm not sure if that one does). Edit 2: Upon further inspection, it does not create pcap streams. I guess if need be I can write a compatible version, but that is not the desired choice.
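    If writing a small tool does turn out to be the path of least resistance, the core of it is not much code: terminate TLS with a certificate the client trusts, open a second TLS connection to the real server, pump bytes both ways, and log the plaintext. A rough Python sketch is below; the certificate/key paths, listen port, and target host are placeholders, and it logs to a plain file rather than pcap (pcap output would need an extra library such as scapy or dpkt):

    ```python
    import socket
    import ssl
    import threading

    LISTEN = ("0.0.0.0", 8443)            # point the client here (placeholder)
    TARGET = ("example.com", 443)          # real server (placeholder)
    CERT, KEY = "proxy.crt", "proxy.key"   # cert the client is told to trust (placeholders)

    def pump(src, dst, label, log):
        """Copy bytes src -> dst, appending the decrypted data to the log."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            log.write(f"--- {label} {len(data)} bytes ---\n".encode() + data + b"\n")
            dst.sendall(data)

    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain(CERT, KEY)
    client_ctx = ssl.create_default_context()

    listener = socket.create_server(LISTEN)
    while True:
        raw, addr = listener.accept()
        inbound = server_ctx.wrap_socket(raw, server_side=True)       # TLS to the client
        outbound = client_ctx.wrap_socket(                            # TLS to the real server
            socket.create_connection(TARGET), server_hostname=TARGET[0])
        log = open("capture.log", "ab")
        threading.Thread(target=pump, args=(inbound, outbound, "C->S", log), daemon=True).start()
        threading.Thread(target=pump, args=(outbound, inbound, "S->C", log), daemon=True).start()
    ```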

    Read the article

  • Bad I/O scheduler?

    - by user62367
    OS: up-to-date Fedora 14, working as a normal desktop. It's doing very well, but if I start VirtualBox and, e.g., install a guest on it, it just freezes. I mean, if there is disk activity in a VirtualBox guest, the computer becomes unresponsive; even the mouse lags, for about 50 minutes. What could be the bottleneck? What could be the problem? If anyone has any tips/howtos to speed it up, please help! It has a normal 2.5" HDD at 5400 RPM. Would it be worth buying a 2.5" HDD with 7200 RPM? T7200 CPU, 4 GB of RAM, "vm.swappiness = 0". Thank you!
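    Since the title asks about the I/O scheduler specifically: the scheduler in use for a disk is exposed in sysfs, and switching it (for example from cfq to deadline) is a cheap experiment to run before buying a faster drive. The device name below is an assumption; on the command line, cat /sys/block/sda/queue/scheduler shows the same information:

    ```python
    # The kernel lists the available schedulers and marks the active one in brackets,
    # e.g. "noop anticipatory deadline [cfq]".
    with open("/sys/block/sda/queue/scheduler") as f:  # "sda" is an assumption
        line = f.read().strip()

    active = line[line.index("[") + 1 : line.index("]")]
    print("available:", line)
    print("active:   ", active)
    ```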

    Read the article

  • New CAT5 cable run is unstable - bad jacks? Bad cable?

    - by BeemerGuy
    This is a little project I'm doing at home. I wanted to wire two rooms together (basically, the router is in one room and the switch is in the second room). So I ran a CAT5 cable between the two rooms and wired an RJ45 jack in each room. I then hooked the two jacks up to the cable tester with two CAT5 patch cables, and all 8 wires seem good. Now, when I connect the switch and the router, the connection is unstable: I ping the router and it barely holds on for two pings before it disconnects, and it stays in that unstable state. Just to make sure the router and the switch are OK, I connected them with a long cable between the two rooms, and that connection is absolutely stable and pings continuously. What could be the cause of the unstable connection? Especially since it pings a few times, so there IS a connection. But why is it unstable? And how come the cable tester says the run is OK, but it's unstable?

    Read the article

  • Open Source project that does SSL Inspection

    - by specs
    I've been assigned to research and spec out a replacement for our old and decrepit HTTP content filtering system. There are several open source filtering packages available, but I've not come across one that does SSL inspection. The new system will need to scale to many branches of different sizes, from say 10 users to a few hundred, so purchasing an appliance for each branch isn't desirable. When we're further along we will do custom programming, as we have a few unique needs in other aspects of filtering, so if the suggestion takes a bit of customization, that won't be a problem.

    Read the article

  • nginx 502 Bad Gateway on every external site

    - by Leandros
    I just installed nginx and followed the guides on the official site to set it up with php5-fpm, but it just won't work. Not even the default site, without PHP, works from outside my server. I tried listen = 127.0.0.1:7777 and listen = /var/run/php5-fpm.sock; neither works. I can access http://localhost with lynx on the server itself, but not from somewhere else (with the external IP, obviously). Yes, the php5-fpm daemons are running; yes, the ports (80 and 7777) are open. It doesn't work with php-cgi either. My config:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;

            proxy_buffers 16 16k;
            proxy_buffer_size 32k;
            fastcgi_buffers 16 16k;
            fastcgi_buffer_size 32k;
            fastcgi_connect_timeout 300;
            fastcgi_send_timeout 300;
            fastcgi_read_timeout 300;
        }

    Server config (symlinked into sites-enabled):

        server {
            server_name skilloverflow.de *.skilloverflow.de;
            root /var/www/blog.skilloverflow.de/htdocs;
            index index.php;

            error_log /var/log/nginx/skilloverflow.error.log;
            access_log /var/log/nginx/skilloverflow.access.log;

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            location / {
                # This is cool because no php is touched for static content.
                # include the "?$args" part so non-default permalinks doesn't break when using query string
                try_files $uri $uri/ /index.php?$args;
            }

            location ~ [^/]\.php(/|$) {
                fastcgi_split_path_info ^(.+?\.php)(/.*)$;
                if (!-f $document_root$fastcgi_script_name) {
                    return 404;
                }
                fastcgi_pass 127.0.0.1:7777;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires max;
                log_not_found off;
            }

            # deny access to apache .htaccess files
            location ~ /\.ht {
                deny all;
            }
        }

    PHP version: 5.4.17-1. nginx version: 1.2.1. Debian 6.0.7, Linux 2.6.32. Edit: Lighttpd is still installed; does that matter? It's not running, though. Edit 2: No error or access log entries are generated; the logs are all empty.

    Read the article

  • Bad results converting PDF to EPS on Linux

    - by Tim
    I'm having some trouble converting PDFs (created by Adobe Illustrator on a Mac) to EPS. I have tried several things, but I am wondering if there is a better option. The following list is ordered by decreasing quality:

    - inkscape --export-area-page --export-eps=out.eps in.pdf -- the graphical program Inkscape works best, but is a bit slow;
    - pdftops -eps in.pdf out.eps -- uses Poppler; works well and is fast;
    - pdf2ps in.pdf out.eps -- uses Ghostscript and works OK for simple documents;
    - convert in.pdf out.eps -- uses ImageMagick and always rasterizes the image.

    I haven't tested the following:

    - acroread -toPostScript -- uses acroread (Linux only).

    Some issues I've found:

    - Transparency is not supported in EPS, but instead of flattening the layers, most programs rasterize the image, producing big files and ugly graphs. Inkscape handles this best by rasterizing only the unsupported area.
    - Gradients are rendered properly by Inkscape, but Poppler somehow chops up the gradient into many shapes of different colors.
    - Greek symbols are seemingly not supported by Ghostscript and are rasterized (using pdf2ps).

    What are your experiences with this kind of task? Did I forget certain programs and/or command-line options that improve quality? I found some posts on this, but not a (thorough) comparison of the possibilities; please correct me if I'm wrong. Related post: How to convert PDF to EPS? on TeX
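    For comparing the converters side by side on a batch of files, the commands in the list above are easy to drive from a small script. A rough Python sketch (the flags are exactly those from the list; the input path and output names are placeholders, and each tool obviously has to be installed):

    ```python
    # Run each converter on the same input, then compare exit codes and output sizes.
    import subprocess
    from pathlib import Path

    src = "in.pdf"  # placeholder input
    jobs = [
        ("inkscape", ["inkscape", "--export-area-page", "--export-eps=out-inkscape.eps", src], "out-inkscape.eps"),
        ("pdftops",  ["pdftops", "-eps", src, "out-pdftops.eps"], "out-pdftops.eps"),
        ("pdf2ps",   ["pdf2ps", src, "out-pdf2ps.eps"], "out-pdf2ps.eps"),
        ("convert",  ["convert", src, "out-convert.eps"], "out-convert.eps"),
    ]

    for name, cmd, out in jobs:
        result = subprocess.run(cmd, capture_output=True, text=True)
        size = Path(out).stat().st_size if Path(out).exists() else 0
        print(f"{name:8} exit={result.returncode}  {size / 1024:8.1f} KiB  -> {out}")
    ```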

    Read the article

  • bash completion processing gone bad, how to debug?

    - by msw
    It all started with a simple alias gv='gvim --remote-quiet', and now gv Space Tab gives nothing where it normally should give filenames. Oddly, alias gvi='gvim --remote-quiet' works as expected. I clearly have a workaround, but I'd like to know what is catching my gv for special processing. compopt is no help, as gv shares the same settings as ls, which does filename completion correctly:

        $ compopt gv
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs gv
        $ compopt ls
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs ls

    The complete command is slightly more helpful, but it doesn't tell me why my two characters got singled out for alteration:

        $ complete -p gv
        complete -o filenames -F _filedir_xspec gv
        $ complete -p ls
        complete -o filenames -F _longopt ls
        $ complete -p echo
        bash: complete: echo: no completion specification
        $ alias gvi='gvim --remote-silent'
        msw@tallguy:~/.gnupg$ complete -p gvi
        bash: complete: gvi: no completion specification

    Where did complete -o filenames -F _filedir_xspec gv come from?

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after startup with a BAD_POOL_HEADER (0x00000019) error caused by the SFC4.SYS driver. After googling for a while I found out that this is the driver for my Mustek 1200 CP scanner. Booting into safe mode and uninstalling it solved the problem... and created another one: now I can't use my scanner. The weird thing is that it had been working on this PC for a while without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier system restore points didn't help. I've tried re-installing the driver from the Mustek website, in case my copy got corrupted or infected by a virus, but that did not help; it still bluescreens. Also, I've installed Avast and scanned my PC; no viruses were found. If anyone has had this problem before or has an idea what might have caused it, please help. Edit: @Michael Todd: ...try installing on another PC... I've installed it on a friend's PC. He has the same OS version, with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling the driver :] ). It worked fine; no bluescreens or anything. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is re-installing XP, or installing Windows 7. I'm not too happy about the prospect of mucking about with BIOS settings...

    Read the article

  • Has the hardware in my modem gone bad?

    - by Tyler Scott
    I contacted CenturyLink about my modem recently and received useless and unrelated information. The problem seems to be that the modem will no longer save settings, the web interface is unusable except in Internet Explorer for some reason, and the modem keeps resetting. CenturyLink claimed it had to do with signal strength, but I checked and it is currently between "good" and "outstanding" according to this. All of the lights remain green even when it starts acting up and I lose internet, shortly before it crashes and reboots. Does anyone have any idea what is going on or what I can do to fix it? (Asking CenturyLink again is obviously not going to help.) Update 1: Accessing the syslog from the web interface causes a crash. After it reboots, the log looks as follows:

        01/01/1970 12:01:29 AM Ethernet Ethernet client connected ,ip(192.168.0.2), mac(1c:6f:65:4c:6d:3b)
        01/01/1970 12:01:38 AM Wireless 802.11 client connected ,ip(192.168.0.18), mac(d0:df:c7:c2:73:ca)
        01/01/1970 12:01:41 AM System Event Line 0: VDSL2 link up, Bearer 0, us=20128, ds=40127
        01/01/1970 12:01:43 AM dhcp6s[2028] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:50 AM dhcp6s[2469] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        01/01/1970 12:01:52 AM radvd[2306] poll error: Interrupted system call
        01/01/1970 12:01:56 AM PPP Link PPP server detected.
        01/01/1970 12:01:56 AM PPP Link PPP session established.
        01/01/1970 12:01:56 AM PPP Link PPP LCP UP.
        01/01/1970 12:01:56 AM System Event Received valid IP address from server. Connection UP.
        06/05/2014 08:16:01 AM radvd[2511] poll error: Interrupted system call
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:03 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM System Event Dead loop on virtual device tun6rd, fix it urgently!
        06/05/2014 08:16:04 AM dhcp6s[3236] dhcp6_ctl_authinit: failed to open /etc/dhcp6sctlkey: No such file or directory
        06/05/2014 08:16:08 AM Wireless 802.11 client connected ,ip(192.168.0.7), mac(44:6d:57:c4:d7:08)

    I can also get it to crash on various other pages. I am guessing the web server is unstable.

    Read the article

  • public key infrastructure - distribute bad root certificates

    - by iamrohitbanga
    Suppose a hacker launches a new Linux distro with Firefox bundled in it. A browser ships with the certificates of the root certification authorities of the PKI, and because Firefox is a free browser, anyone can package it with fake root certificates. Could this be used to falsely authenticate some websites? How? Many existing Linux distros are mirrored by volunteers; they could easily package software containing certificates that enable such attacks. Is the above possible? Has such an attack taken place before?

    Read the article

  • Bad resolution from Displayport converter on third of three screens

    - by Carl
    I am currently using three screens at the same time with an ATI 5770 and an active DisplayPort converter. The problem is that the third screen (the one using the active DisplayPort converter) shows terrible resolution compared to my other two screens. The third screen is a Samsung SyncMaster P23. Two of my screens run at a max resolution of 1920x1080, while the third one is only capable of 1600x1200. Do any of you know a solution to this problem?

    Read the article
