Search Results

Search found 4691 results on 188 pages for 'ran biron'.

Page 114/188

  • Parallel Port Problem in 12.04

    - by Frank Oberle
    I have a “dumb” printer attached to a parallel port in my machine which works fine under the “other” resident operating system (from Redmond) on the same machine. I recently added Ubuntu 12.04 as a dual boot on the machine, but Ubuntu doesn't seem to recognize the parallel port at all. All I need to set up a printer is a really plain-vanilla fixed pitch text-only generic driver, which is present, but no parallel ports show up. (The other printers, all on USB ports, seem to work just fine). Following what appeared to me to be the most reasonable of the many conflicting pieces of advice on the web, here's what I did: I added the following lines to /etc/modules parport_pc ppdev parport Then, after rebooting, I checked to see that the lines were still present, and they were. I ran dmesg | grep par and got the following references in the output that seemed like they might have to do with the parallel port: [ 14.169511] parport_pc 0000:03:07.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [ 14.169516] PCI parallel port detected: 9710:9805, I/O at 0xce00(0xcd00), IRQ 21 [ 14.169577] parport0: PC-style at 0xce00 (0xcd00), irq 21, using FIFO [PCSPP,TRISTATE,COMPAT,ECP] [ 14.354254] lp0: using parport0 (interrupt-driven). [ 14.571358] ppdev: user-space parallel port driver [ 16.588304] type=1400 audit(1347226670.386:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=964 comm="apparmor_parser" [ 16.588756] type=1400 audit(1347226670.386:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=964 comm="apparmor_parser" [ 16.673679] type=1400 audit(1347226670.470:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1010 comm="apparmor_parser" [ 16.675252] type=1400 audit(1347226670.470:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1014 comm="apparmor_parser" [ 16.675716] type=1400 audit(1347226670.470:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1014 comm="apparmor_parser" [ 16.676636] type=1400 audit(1347226670.474:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1015 comm="apparmor_parser" [ 16.677124] type=1400 audit(1347226670.474:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1015 comm="apparmor_parser" [ 1545.725328] parport0: ppdev0 forgot to release port I have no idea what any of that means, but the line “parport0: ppdev0 forgot to release port ” seems unusual. I was still unable to add a printer for my old clunker, so I tried the direct approach, typing echo “Hello” > /dev/lp0 and received a Permission denied message. I then tried echo “Hello” > /dev/parport0 which didn't give me any message at all, but still didn't print anything. Running the command sudo /usr/lib/cups/backend/parallel gives the following: direct parallel:/dev/lp0 "unknown" "LPT #1" "" "" Checking the permissions for /dev/parport0, Owner, Group, and Other are all set to read and write. crw-rw---- 1 root lp 6, 0 Sep 9 16:37 /dev/lp0 crw-rw-rw- 1 root lp 99, 0 Sep 9 16:37 /dev/parport0 The output of the command lpinfo -v includes the following line: direct parallel:/dev/lp0 I've read several web postings that seem to suggest this has been a problem for several years, but the bug reports were closed because there wasn't enough information to address the issue (shades of Microsoft!). Any suggestions as to what I might be missing here?
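
    One thing worth ruling out first, since /dev/lp0 above is owned by root:lp with no world access: whether the login user is actually in the lp group. A minimal sketch (assuming the account doing the printing is the one you are logged in as):

        groups $USER                     # is "lp" in the list?
        sudo adduser $USER lp            # if not, add yourself to the lp group
        # log out and back in so the membership applies, then retry the raw test:
        echo "test" | sudo tee /dev/lp0  # sudo sidesteps the permission question entirely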

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report a file it is supposed to run is "not found," when the file hasn't been touched, and in fact, the entire system hasn't been touched since it last ran successfully? I have a cronjob schedule I define by doing sudo crontab -e In it, I have dozens of cron jobs that run successfully.. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab. All scripts identify the shell as their first line. Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In that, the output clearly shows the output of the script successfully running time after time for weeks, and then suddenly the following appears: /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found If I do the ls command, copying and pasting that exact path from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report that file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs on the next scheduled time, putting its output in the log, no longer reporting the script is "not found". It seems to me the contents of the script trying to be run are irrelevant since cron doesn't even process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know is continuing to run, because I have its output mailed to me. So I know cron is running and continues to run at least one other job, even after it suddenly reports this job's script is "not found". All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will occur. It will likely be weeks from now. By the way, I am running a script to syncronize the clock, and I see the time is exactly what it should be.
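
    One classic cause of /bin/sh reporting an existing script as "not found" is the interpreter line rather than the file itself (for example a shebang that picked up DOS line endings, or that points at something temporarily unavailable). A hedged set of checks, reusing the path from the error message:

        head -1 /home/iupress/bin/sync-email_images | cat -A   # a trailing ^M$ means carriage returns crept in
        file /home/iupress/bin/sync-email_images               # confirms what interpreter the script declares
        mount | grep home                                      # is the home directory on a network or auto-mounted filesystem?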

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah!  When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial Of Service prevention feature. This article is a short follow up to my last post on this feature. In this post, I will outline some of the lessons learned that were the result of testing the behavior of SQL Azure from two machines. From the standpoint of SQL Azure, they look like one machine since they are behind a NAT.
    All logins affected - The first thing to note is that all the logins are affected. If you lock yourself out of a specific database, none of the logins will work on that database. In fact the database size becomes "--" in the SQL Azure Portal.
    Less than 100 sessions - I was able to see 50+ sessions being made in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. The DoS feature appears to be triggered in part by the number of open sessions. I could not determine if the lockout is triggered by the speed at which connection requests are made, however.
    Other Databases Unaffected - This was interesting... the DoS feature works at the database level. Other databases were available for me to use.
    Just Wait - Initially I thought that going through SQL Azure and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time the database became available again within 30 seconds or so; the second time within 2-3 minutes and the third time... within 2-3 hours...
    Successful Logins - The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked out.

    Read the article

  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune.  That’s enough data to start to look at the results in aggregate.  The first thing I want to look at is logical reads.  This is the easiest metric to identify and fix. When you upload a trace, I rank each statement based on the total number of logical reads.  I also calculate each statement’s percentage of the total logical reads.  I do the same thing for CPU, duration and logical writes.  When you view a statement you can see all the details like this: This single statement consumed 61.4% of the total logical reads on the system while we were tracing it.  I also wanted to see the distribution of reads across statements.  That graph looks like this: On average, the highest ranked statement consumed just under 50% of the reads on the system.  When I tune a system, I’m usually starting in one of two modes: this “piece” is slow or the whole system is slow.  If a given piece (screen, report, query, etc.) is slow you can usually find the specific statements behind it and tune it.  You can make that individual piece faster but you may not affect the whole system. When you’re trying to speed up an entire server you need to identify those queries that are using the most disk resources in aggregate.  Fixing those will make them faster and it will leave more disk throughput for the rest of the queries. Here are some of the things I’ve learned querying this data: The highest ranked query averages just under 50% of the total reads on the system. The top 3 ranked queries average 73% of the total reads on the system. The top 10 ranked queries average 91% of the total reads on the system. Remember these are averages across all the traces that have been uploaded.  And I’m guessing that people mainly upload traces where there are performance problems so your mileage may vary. I also learned that slow queries aren’t the problem.  Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler.  That picked out individual queries but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank you’d see that the highest ranked queries also have the highest execution counts.  The graph would look very similar to the one above but flatter.  These queries don’t look that bad individually but run so often that they hog the disk capacity. The take away from all this is that you really should be tuning the top 10 queries if you want to make your system faster.  Tuning individually slow queries will help those specific queries but won’t have much impact on the system as a whole.

    Read the article

  • Zenoss Setup for Windows Servers

    - by Jay Fox
    Recently I was saddled with standing up Zenoss for our enterprise.  We're running about 1200 servers, so manually touching each box was not an option.  We use LANDesk for a lot of automated installs and patching - more about that later. The steps below may not necessarily have to be completed in this order - it's just the way I did it.
    STEP ONE: Set up a standard AD user.  We want to do this so there's minimal security exposure.  Call the account whatever you want; "domain/zenoss" for our examples.
    STEP TWO: Make the following local groups accessible by your zenoss account: Distributed COM Users, Performance Monitor Users, Event Log Readers (which doesn't exist on pre-2008 machines). Here's the Powershell script I used to set up access to these local groups:
      # Created to add Active Directory account to local groups
      # Must be run from elevated prompt, with permissions on the remote machine(s).
      # The txt file should contain the names of the machines that need the account added, one per line.
      # Script will process machines line by line.
      foreach($i in (gc c:\tmp\computers.txt)){
      # Add the user to the first group
      $objUser=[ADSI]("WinNT://domain/zenoss")
      $objGroup=[ADSI]("WinNT://$i/Distributed COM Users")
      $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
      # Add the user to the second group
      $objUser=[ADSI]("WinNT://domain/zenoss")
      $objGroup=[ADSI]("WinNT://$i/Performance Monitor Users")
      $objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
      # Add the user to the third group - Group doesn't exist on < Server 2008
      #$objUser=[ADSI]("WinNT://domain/zenoss")
      #$objGroup=[ADSI]("WinNT://$i/Event Log Readers")
      #$objGroup.PSBase.Invoke("Add",$objUser.PSBase.Path)
      }
    STEP THREE: Set up security on the machine's WMI namespace so our domain/zenoss account can access it. The default namespace for zenoss is root/cimv2. Here's the Powershell script:
      #Grant the account defined below access to the WMI Namespace
      #Has to be run as an account with permissions on the remote machine
      function get-sid
      {
        Param ($DSIdentity)
        $ID = new-object System.Security.Principal.NTAccount($DSIdentity)
        return $ID.Translate( [System.Security.Principal.SecurityIdentifier] ).toString()
      }
      $sid = get-sid "domain\zenoss"
      $SDDL = "A;;CCWP;;;$sid"
      $DCOMSDDL = "A;;CCDCRP;;;$sid"
      $computers = Get-Content "c:\tmp\computers.txt"
      foreach ($strcomputer in $computers)
      {
        $Reg = [WMIClass]"\\$strcomputer\root\default:StdRegProv"
        $DCOM = $Reg.GetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction").uValue
        $security = Get-WmiObject -ComputerName $strcomputer -Namespace root/cimv2 -Class __SystemSecurity
        $converter = new-object system.management.ManagementClass Win32_SecurityDescriptorHelper
        $binarySD = @($null)
        $result = $security.PsBase.InvokeMethod("GetSD",$binarySD)
        $outsddl = $converter.BinarySDToSDDL($binarySD[0])
        $outDCOMSDDL = $converter.BinarySDToSDDL($DCOM)
        $newSDDL = $outsddl.SDDL += "(" + $SDDL + ")"
        $newDCOMSDDL = $outDCOMSDDL.SDDL += "(" + $DCOMSDDL + ")"
        $WMIbinarySD = $converter.SDDLToBinarySD($newSDDL)
        $WMIconvertedPermissions = ,$WMIbinarySD.BinarySD
        $DCOMbinarySD = $converter.SDDLToBinarySD($newDCOMSDDL)
        $DCOMconvertedPermissions = ,$DCOMbinarySD.BinarySD
        $result = $security.PsBase.InvokeMethod("SetSD",$WMIconvertedPermissions)
        $result = $Reg.SetBinaryValue(2147483650,"software\microsoft\ole","MachineLaunchRestriction",$DCOMbinarySD.binarySD)
      }
    STEP FOUR: Get the SID for our zenoss account. Powershell:
      #Provide AD User, get SID
      $objUser = New-Object System.Security.Principal.NTAccount("domain", "zenoss")
      $strSID = $objUser.Translate([System.Security.Principal.SecurityIdentifier])
      $strSID.Value
    STEP FIVE: Modify the Service Control Manager to allow access to the zenoss AD account. This command can be run from an elevated command line, or through Powershell:
      sc sdset scmanager "D:(A;;CC;;;AU)(A;;CCLCRPRC;;;IU)(A;;CCLCRPRC;;;SU)(A;;CCLCRPWPRC;;;SY)(A;;KA;;;BA)(A;;CCLCRPRC;;;PUT_YOUR_SID_HERE_FROM_STEP_FOUR)S:(AU;FA;KA;;;WD)(AU;OIIOFA;GA;;;WD)"
    In step two the script plows through a txt file and processes each computer listed on each line.  For the other scripts I ran them on each machine using LANDesk.  You can probably edit those scripts to process a text file as well. That's what got me off the ground monitoring the machines using Zenoss.  Hopefully this is helpful for you.  Watch the line breaks when copying the scripts.

    Read the article

  • I cannot remove/install software some dependency issue?

    - by Ryuzaki
    I'm having difficulty trying to install/uninstall applications in Ubuntu 12.04 LTS. I have this warning error that appears on my desktop in the form of a red icon with a line through it and it states: An error occurred, please run Package Manager from the right-click menu or apt-get in a terminal to see what is wrong. The error message was: 'Error: BrokenCount 0' — This usually means that your installed packages have unmet dependencies. I tried to repair automatically through ubuntu software center and I kept getting errors or it just didn't seem to work. Afterwards, I opened terminal and used sudo apt-get check command to see what the problem was and the results yielded: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libjack-jackd2-0 : Breaks: libjack-jackd2-0:i386 (!= 1.9.8~dfsg.1-1ubuntu1) but 1.9.8~dfsg.2-1precise1 is installed libjack-jackd2-0:i386 : Breaks: libjack-jackd2-0 (!= 1.9.8~dfsg.2-1precise1) but 1.9.8~dfsg.1-1ubuntu1 is installed E: Unmet dependencies. Try using -f. After I ran sudo apt-get -f install (in an attempt to fix the issue at hand), I encountered the following errors/code: okudaira@haru-kano:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libjack-jackd2-0 Suggested packages: jackd2 The following packages will be upgraded: libjack-jackd2-0 1 upgraded, 0 newly installed, 0 to remove and 24 not upgraded. 1 not fully installed or removed. Need to get 0 B/197 kB of archives. After this operation, 3,072 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 181702 files and directories currently installed.) Preparing to replace libjack-jackd2-0 1.9.8~dfsg.1-1ubuntu1 (using .../libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb) ... Unpacking replacement libjack-jackd2-0 ... dpkg: error processing /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb (--unpack): './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb localepurge: Disk space freed in /usr/share/locale: 0 KiB localepurge: Disk space freed in /usr/share/man: 0 KiB localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB localepurge: Disk space freed in /usr/share/omf: 0 KiB Total disk space freed by localepurge: 0 KiB E: Sub-process /usr/bin/dpkg returned an error code (1) The real problem I'm having is interpreting what is being stated. I'm fairly new to the ubuntu experience so I'm not very well-versed with the terminology and the entire in's and out's. Can someone tell me what's wrong with my system? I can no longer install/remove any programs at all.
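
    The line actually blocking everything is dpkg refusing to unpack because buildinfo.gz "is different from the same file on the system". A commonly suggested workaround is to let dpkg overwrite that one file and then finish the repair; this is only a hedged sketch (the .deb name is taken verbatim from the output above, and backing up first is sensible):

        sudo dpkg -i --force-overwrite /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb
        sudo apt-get -f install
        sudo apt-get check   # should come back clean if the dependency pair is resolved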

    Read the article

  • determine if udp socket can be accessed via external client

    - by JohnMerlino
    I don't have access to company firewall server. but supposedly the port 1720 is open on my one ubuntu server. So I want to test it with netcat: sudo nc -ul 1720 The port is listening on the machine ITSELF: sudo netstat -tulpn | grep nc udp 0 0 0.0.0.0:1720 0.0.0.0:* 29477/nc The port is open and in use on the machine ITSELF: lsof -i -n -P | grep 1720 gateway 980 myuser 8u IPv4 187284576 0t0 UDP *:1720 Checked the firewall on current server: sudo ufw allow 1720/udp Skipping adding existing rule Skipping adding existing rule (v6) sudo ufw status verbose | grep 1720 1720/udp ALLOW IN Anywhere 1720/udp ALLOW IN Anywhere (v6) But I try echoing data to it from another computer (I replaced the x's with the real integers): echo "Some data to send" | nc xx.xxx.xx.xxx 1720 But it didn't write anything. So then I try with telnet from the other computer as well: telnet xx.xxx.xx.xxx 1720 Trying xx.xxx.xx.xxx... telnet: connect to address xx.xxx.xx.xxx: Operation timed out telnet: Unable to connect to remote host Although I don't think telnet works with udp sockets. I ran nmap from another computer within the same local network and this is what I got: sudo nmap -v -A -sU -p 1720 xx.xxx.xx.xx Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-31 15:41 EDT NSE: Loaded 36 scripts for scanning. Initiating Ping Scan at 15:41 Scanning xx.xxx.xx.xx [4 ports] Completed Ping Scan at 15:41, 0.10s elapsed (1 total hosts) Initiating Parallel DNS resolution of 1 host. at 15:41 Completed Parallel DNS resolution of 1 host. at 15:41, 0.00s elapsed Initiating UDP Scan at 15:41 Scanning xtremek.com (xx.xxx.xx.xx) [1 port] Completed UDP Scan at 15:41, 0.07s elapsed (1 total ports) Initiating Service scan at 15:41 Initiating OS detection (try #1) against xtremek.com (xx.xxx.xx.xx) Retrying OS detection (try #2) against xtremek.com (xx.xxx.xx.xx) Initiating Traceroute at 15:41 Completed Traceroute at 15:41, 0.01s elapsed NSE: Script scanning xx.xxx.xx.xx. NSE: Script Scanning completed. Nmap scan report for xtremek.com (xx.xxx.xx.xx) Host is up (0.00013s latency). PORT STATE SERVICE VERSION 1720/udp closed unknown Too many fingerprints match this host to give specific OS details Network Distance: 1 hop TRACEROUTE (using port 1720/udp) HOP RTT ADDRESS 1 0.13 ms xtremek.com (xx.xxx.xx.xx) Read data files from: /usr/share/nmap OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 2.04 seconds Raw packets sent: 27 (2128B) | Rcvd: 24 (2248B). The only thing I can think of is a firewall or vpn issue. Is there anything else I can check for before requesting that they look at the firewall server again?
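
    Since UDP gives no handshake to watch, a packet capture on the listening server is the most direct way to tell whether the probes ever arrive. A sketch (the address placeholders are kept as above):

        # on the Ubuntu server, alongside the nc listener:
        sudo tcpdump -ni any udp port 1720

        # from the external client:
        echo "probe" | nc -u -w1 xx.xxx.xx.xxx 1720

    If nothing shows up in the capture from the external client while probes from the internal network do, the datagrams are being dropped upstream (firewall/VPN) before they reach the host.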

    Read the article

  • Getting a virus is *very* annoying

    - by bconlon
    I spent most of yesterday removing an annoying virus from my PC. I feel slightly foolish for getting one in the first place, but after so many years I guess I was always going to eventually succumb. I was also a little surprised at the failure of various tools at removing it. The virus would redirect the browser to websites including ‘licosearch’, ‘hugosearch’ and ‘facebook’, and the disk would be thrashing away infecting dlls in some way. I had the full up to date version of McAfee installed. This identified that there was an issue in some dlls on the system and was able to ‘fix’ them. But they kept getting re-infected. So I installed Microsoft Security Essentials and this too was able to identify and ‘fix’ the infected dlls. The system scans take forever and I really expected better results. I also tried Malwarebytes, Hitman Pro, AVG and Sophos to no avail. Eventually I thought I’d investigate myself. It turned out that on reboot, the virus would start 3 instances of Firefox.exe which I’m guessing would do bad things including infecting as many dlls on the system as possible. I removed Firefox and the virus cleverly then launched 3 instances of Chrome! So I uninstalled Chrome and yes, it then started to launch 3 instances of iexplore.exe. If I’m honest, by this stage I was just seeing if it would be able to use any of the browsers! As it was starting these on reboot, I looked in my User Startup folder and there was a <randomly named>.exe and several log files. I deleted these and rebooted. When I looked they had been recreated. So I then looked in the registry Run and RunOnce entries: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run. Sure enough there were entries to run a file in C:\Program Files\<random name folder>\<random name file>.exe. I deleted this and rebooted and it was fixed. I also looked in the event log and found a warning that Winlogon had failed to start the file C:\Program Files\<random name folder>\<random name file>.exe So I also checked HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon and this entry had also been changed. Finally I ran a full system scan to clean up any infected dlls. I hope it’s gone for good!  #

    Read the article

  • External hard drive issue

    - by Blind Fish
    I am running Ubuntu 12.04 and I have a 500GB external hard drive, formatted in ext4, which I have used for about a year to store batches of data that I am sifting through. About once a week I move a chunk of data from the external drive to my internal drive for processing. As I do that I delete the data from the external drive and empty the trash, thereby clearing up space on the external and also preventing myself from grabbing stuff that I've already sorted through. The vast majority of this is text files, but there are a few .jpegs and .mp4s thrown in, if that matters. Anyway, this has been working without a hitch for nearly a year. So today I plug in the drive and I have an odd issue. I was able to move folders from the external over to my internal drive with no problem, but when I tried to delete those files I was unable to do so. I kept getting the "unable to send file to trash, do you wish to delete permanently?" message. I clicked yes / delete all, but no files were actually deleted. It just sat there. Even worse, while the system was trying in vain to delete those files I was unable to move more files over. In short, I was stuck. I canceled the operation, unmounted and remounted the drive, and then I had an even bigger problem. I have a spreadsheet of all the different folders on there ( exactly 818 ) and yet when I opened the drive in Nautilus, it was only finding 512 folders. So I had 306 missing folders, but my free space was unchanged. Immediately I think that the drive may be corrupted. I went into Disk Utility and ran the check disk option. It said that the drive was NOT clean, but offered no remedy or option to repair. I went back into the drive in Nautilus and once again attempted to delete the files I had already moved. Same issue. I clicked on Details and it said that it was unable to create a trashing file I/O error. I've looked, and it's not a permissions issue. The drive and all folders that are in it all belong to me, and they are all read / write. I've started running badblocks on the drive, but it looks like that is going to take several hours to complete. Any ideas?
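
    While badblocks runs, a filesystem-level check is probably the more relevant step for a volume that Disk Utility reports as not clean. A hedged sketch — the sdX1 name is a placeholder, so confirm the real device first and only fsck an unmounted filesystem:

        lsblk -f                      # identify the external partition, e.g. /dev/sdb1
        sudo umount /dev/sdX1
        sudo fsck.ext4 -f /dev/sdX1   # -p auto-fixes the safe problems, otherwise answer the prompts
        dmesg | tail -n 50            # any USB resets or I/O errors logged while the drive was busy?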

    Read the article

  • Speaker Notes...

    - by wulfers
    At a .Net User Group meeting this week, I experienced two poorly prepared speakers floundering through presentations….  As a Lead Technologist at the company I work for, I have experience training technical staff and also giving presentations at code camps.  Here are a few guidelines for aspiring speakers you might find helpful…
    1. Do not stand in front of your audience and read your slides.  This is offensive to your audience and not what they came for...  Your slides are there to reinforce the information you are presenting, to give the audience a little clarification on some terms you may use, and to act as a visual aid for some complicated issues.
    2. Have someone review your presentation (slides, notes, …) who speaks the language you will be presenting in fluently.  Also record at least ten minutes of your presentation and have that same person review that.  One of the speakers this week used the word “Basically” fifty times in less than thirty minutes…  I started to flinch every time he used the term.
    3. Be Prepared - before the presentation begins.  Don't make any last minute changes to your presentation or demo code the night before.  Don't patch your laptop or demo servers the night before.  If possible create a virtual image that you only use for presentations and use that (refreshed before every presentation).
    4. Know the level of expertise of your audience.  Speaking above or below their abilities will make or break your presentation.
    5. Deliver what you promise.  The presentation this week was supposed to be on BDD (Behavior Driven Development).  The presenter completely ran off track and 90% of the discussion was about how his team mistakenly used TDD (Test Driven Development) and was unhappy with the results.  Based on his loss of focus we only heard a rushed 10 minute presentation on BDD, which was a disservice to the audience.
    6. Practice your presentation with your own small team before you try this on a room full of people you don't know.  A side benefit of doing this with your own team is that you can get candid feedback from your team and also get kudos for training your own team.  I find I can also turn my presentations into technical white papers and get a third benefit from the work I've put into a presentation.
    7. Sharpen your own saw.  Pick a topic that is fairly current.  Something you would like to learn about and would benefit your current career path.
    8. Have fun doing it.

    Read the article

  • OpenGL ES 2 jittery camera movement

    - by user16547
    First of all, I am aware that there's no camera in OpenGL (ES 2), but from my understanding proper manipulation of the projection matrix can simulate the concept of a camera. What I'm trying to do is make my camera follow my character. My game is 2D, btw. I think the principle is the following (take Super Mario Bros or Doodle Jump as reference - actually I'm trying to replicate the mechanics of the latter): when the caracter goes beyond the center of the screen (in the positive axis/direction), update the camera to be centred on the character. Else keep the camera still. I did accomplish that, however the camera movement is noticeably jittery and I ran out of ideas how to make it smoother. First of all, my game loop (following this article): private int TICKS_PER_SECOND = 30; private int SKIP_TICKS = 1000 / TICKS_PER_SECOND; private int MAX_FRAMESKIP = 5; @Override public void run() { loops = 0; if(firstLoop) { nextGameTick = SystemClock.elapsedRealtime(); firstLoop = false; } while(SystemClock.elapsedRealtime() > nextGameTick && loops < MAX_FRAMESKIP) { step(); nextGameTick += SKIP_TICKS; loops++; } interpolation = ( SystemClock.elapsedRealtime() + SKIP_TICKS - nextGameTick ) / (float)SKIP_TICKS; draw(); } And the following code deals with moving the camera. I was unsure whether to place it in step() or draw(), but it doesn't make a difference to my problem at the moment, as I tried both and neither seemed to fix it. center just represents the y coordinate of the centre of the screen at any time. Initially it is 0. The camera object is my own custom "camera" which basically is a class that just manipulates the view and projection matrices. if(character.getVerticalSpeed() >= 0) { //only update camera if going up float[] projectionMatrix = camera.getProjectionMatrix(); if( character.getY() > center) { center += character.getVerticalSpeed(); cameraBottom = center + camera.getBottom(); cameraTop = center + camera.getTop(); Matrix.orthoM(projectionMatrix, 0, camera.getLeft(), camera.getRight(), center + camera.getBottom(), center + camera.getTop(), camera.getNear(), camera.getFar()); } } Any thought about what I should try or what I am doing wrong? Update 1: I think I updated every value you can see on screen to check whether the jittery movement is affected by that, but nothing changed, so something must be fundamentally flawed with my approach/calculations.

    Read the article

  • Fuji camera "mounts" but folder not in Dolphin After Kubuntu 13.10 upgrade

    - by user207207
    Fuji camera mount reported in attached devices but not visible in Dolphin After Kubuntu 13.10 upgrade Have reinstalled the driver, and a few other suggestions, for other cameras mounts failing on previous Ubuntu upgrades. I have already spent a couple of hours trying to get my photo's off the camera, very annoying. Worked perfectly in 11.04, 11.10, 12.04, 12.10 and 13.04. dmesg | tail; lsusb; lsb_release -a [ 6181.858786] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90009400000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000 [17261.396236] CPUM: APIC 00 at 00000000fee00000 (mapped at ffffc90000c6a000) - ver 0x80050010, lint0=0x10700 lint1=0x00400 pc=0x00400 thmr=0x10000 [17261.396239] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90000c72000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000 [17261.396241] CPUM: APIC 02 at 00000000fee00000 (mapped at ffffc90000c70000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000 [17261.396255] CPUM: APIC 01 at 00000000fee00000 (mapped at ffffc90000c6e000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000 [32456.884907] usb 2-5: new high-speed USB device number 2 using ehci-pci [32457.654046] usb 2-5: New USB device found, idVendor=04cb, idProduct=01e8 [32457.654050] usb 2-5: New USB device strings: Mfr=0, Product=2, SerialNumber=3 [32457.654052] usb 2-5: Product: Digital Camera [32457.654053] usb 2-5: SerialNumber: 4C3230302020091117CAA59WP18548 Bus 002 Device 002: ID 04cb:01e8 Fuji Photo Film Co., Ltd Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 003: ID 2013:024f PCTV Systems nanoStick T2 290e Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub No LSB modules are available. Distributor ID: Ubuntu vissibleDescription: Ubuntu 13.10 Release: 13.10 Codename: saucy sudo apt-get install gvfs-bin gvfs-mount gphoto2://[usb:002,002] Error mounting location: Error initializing camera: -108: No such file or directory ...... I have reported a bug in Dolphin, which has been transferred to Solid. Further information : I ran solid-hardware list details udi = '/org/kde/solid/udev/sys/devices/pci0000:00/0000:00:04.1/usb2/2-5' parent = '/org/kde/solid/udev' (string) vendor = '04cb' (string) product = 'Digital Camera' (string) description = 'Camera' (string) Block.major = 189 (0xbd) (int) Block.minor = 137 (0x89) (int) Block.device = '/dev/bus/usb/002/010' (string) Camera.supportedProtocols = {'ptp'} (string list) Camera.supportedDrivers = {'gphoto'} (string list) I still can't get my photo's off, I can see the folders using the Gimp menu. If anyone has got any ideas, I'm willing to try them.
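
    A frequent culprit when a camera is detected but cannot be initialized is the gvfs volume monitor grabbing the device first. A hedged workaround sketch using the command-line gphoto2 client (an extra package, and no guarantee this is the cause here):

        sudo apt-get install gphoto2   # command-line PTP client
        gvfs-mount -s gphoto2          # ask gvfs to release the camera
        gphoto2 --auto-detect          # the Fuji should appear on usb:002,002
        gphoto2 --get-all-files        # downloads the photos into the current directory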

    Read the article

  • Nvidia driver overscan issue second monitor via dvi-d cable

    - by benmichael
    Ok, I know that I have a bit of a bizarre setup, but here goes. I have an old laptop, HP Pavilion 6000. The graphics card in there is a GeForce 7150M. The monitor connection is an old 18pin. The external monitor I use is a Samsung SyncMaster 2333. Don't ask me why, but this monitor only has a dvi-d connection (yes, i have searched it). So I have the monitor plugged into the laptop. If I use any of the Nvidia propriety drivers and try to set the resolution up to 1920x1080 (the monitor's native resolution), I get a massive overscan issue. Over the years I have tried to get this to work, tinkering with my xorg.conf to death. I have also tried this on every Ubuntu since 10.04, on all the corresponding LUbuntus, and on all the Linux Mints since Lisa. Exact same issue. I have even tried it in WinDoze and it works perfectly there (although I did get the error once, but was unable to reproduce it). Using the Open Source drivers it works perfectly iff I switch off the laptop monitor (this makes no difference with the Nvidia drivers). I would have happily gone on using the Open Source drivers, except that since upgrading to LUbuntu 12.10, the Open Source drivers make my monitor completely hazy and have the same overscan issue until I (through the haze, only because I know where things are) go to the monitor settings, activate the laptop's monitor, then deactivate it, and suddenly it comes right. I have to do this every time. So I have to find a way to fix one of them, so I may as well tackle the propriety drivers, hence this overlong question. Amidst other things, I have tried the nvidia-settings, but because it is connected to an 18pin, it detects the monitor as a vga monitor and does not give me overscan correction options. I have tried custom modlines (although there are always more of those try), I have tried using xrandr, and I have tried all the FlatPanelOptions. What I have not tried is a Gentoo build, as I don't have time any more to do that installation, but up to about three years ago when I ran Gentoo exclusively I did not have this issue. Below is an link to an image with a red highlight around the portion of the screen visible to me, the numbers around it are the number of pixels which are cut off. This does seem to drift a few pixels every now and then. Thanks in advance. Nvidia driver issue image

    Read the article

  • Still prompted for a password after adding SSH public key to a server

    - by Nathan Arthur
    I'm attempting to setup a git repository on my Dreamhost web server by following the "Setup: For the Impatient" instructions here. I'm having difficulty setting up public key access to the server. After successfully creating my public key, I ran the following command: cat ~/.ssh/[MY KEY].pub | ssh [USER]@[MACHINE] "mkdir ~/.ssh; cat >> ~/.ssh/authorized_keys" ...replacing the appropriate placeholders with the correct values. Everything seemed to go through fine. The server asked for my password, and, as far as I can tell, executed the command. There is indeed a ~/.ssh/authorized_keys file on the server. The problem: When I try to SSH into the server, it still asks for my password. My understanding is that it shouldn't be asking for my password anymore. What am I missing? EDIT: SSH -v Log: Macbook:~ michaeleckert$ ssh -v [USER]@[SERVER URL] OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011 debug1: Reading configuration data /etc/ssh_config debug1: /etc/ssh_config line 20: Applying options for * debug1: /etc/ssh_config line 53: Applying options for * debug1: Connecting to [SERVER URL] [[SERVER IP]] port 22. debug1: Connection established. debug1: identity file /Users/michaeleckert/.ssh/id_rsa type -1 debug1: identity file /Users/michaeleckert/.ssh/id_rsa-cert type -1 debug1: identity file /Users/michaeleckert/.ssh/id_dsa type -1 debug1: identity file /Users/michaeleckert/.ssh/id_dsa-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.2 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5p1 Debian-6+squeeze3 debug1: match: OpenSSH_5.5p1 Debian-6+squeeze3 pat OpenSSH_5* debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Server host key: RSA [STRING OF NUMBERS AND LETTERS SEPARATED BY SEMI-COLONS] debug1: Host ‘[SERVER URL]' is known and matches the RSA host key. debug1: Found key in /Users/michaeleckert/.ssh/known_hosts:2 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password debug1: Next authentication method: publickey debug1: Trying private key: /Users/michaeleckert/.ssh/id_rsa debug1: Trying private key: /Users/michaeleckert/.ssh/id_dsa debug1: Next authentication method: password [USER]@[SERVER URL]'s password: debug1: Authentication succeeded (password). Authenticated to [SERVER URL] ([[SERVER IP]]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Sending environment. debug1: Sending env LANG = en_US.UTF-8 Welcome to [SERVER URL] Any malicious and/or unauthorized activity is strictly forbidden. All activity may be logged by DreamHost Web Hosting. Last login: Sun Nov 3 12:04:21 2013 from [MY IP] [[SERVER NAME]]$
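
    One detail in the -v output: the client only offered ~/.ssh/id_rsa and ~/.ssh/id_dsa, so a key pair saved under a different file name is never tried. A hedged sketch of the two usual fixes, keeping the placeholders from above:

        # point ssh at the actual private key once
        ssh -v -i ~/.ssh/[MY KEY] [USER]@[MACHINE]

        # or permanently, in ~/.ssh/config on the laptop
        Host [MACHINE]
            IdentityFile ~/.ssh/[MY KEY]

        # and on the server, sshd silently ignores authorized_keys with loose permissions
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys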

    Read the article

  • SharePoint 2010 HierarchicalConfig Caching Problem

    - by Damon
    We've started using the Application Foundations for SharePoint 2010 in some of our projects at work, and I came across a nasty issue with the hierarchical configuration settings.  I have some settings that I am storing at the Farm level, and as I was testing my code it seemed like the settings were not being saved - at least that is what it appeared was the case at first.  However, I happened to reset IIS and the settings suddenly appeared.  Immediately, I figured that it must be a caching issue and dug into the code base.  I found that there was a 10 second caching mechanism in the SPFarmPropertyBag and the SPWebAppPropertyBag classes.  So I ran another test where I waited 10 seconds to make sure that enough time had passed to force the caching mechanism to reset the data.  After 10 minutes the cache had still not cleared.  After digging a bit further, I found a double lock check that looked a bit off in the GetSettingsStore() method of the SPFarmPropertyBag class: if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds) > cacheInterval)) { //Need to exist so don't deadlock. rrLock.EnterWriteLock(); try { //make sure first another thread didn't already load... if (_settingStore == null) { _settingStore = WebAppSettingStore.Load(this.webApplication); lastLoad = DateTime.Now; } } finally { rrLock.ExitWriteLock(); } } What ends up happening here is the outer check determines if the _settingStore is null or the cache has expired, but the inner check is just checking if the _settingStore is null (which is never the case after the first time it's been loaded).  Ergo, the cached settings are never reset.  The fix is really easy, just add the cache checking back into the inner if statement. //make sure first another thread didn't already load... if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds) > cacheInterval) { _settingStore = WebAppSettingStore.Load(this.webApplication); lastLoad = DateTime.Now; } And then it starts working just fine. as long as you wait at least 10 seconds for the cache to clear.

    Read the article

  • Wifi problems after upgrading to 13.10

    - by Simon
    I just upgraded to Ubuntu 13.10, but since the upgrade I don't have internet access via wifi anymore. I can: See networks Connect to a network Ping myself (localhost, 192.168.0.103) I can't: Ping others (including other devices on the same wireless network, including the gateway/router) Resolve hosts Access any other external resource, whether on my own network or on the internet Using Wireshark, I noticed my computer is continuously sending ARP-requests like "Who has 192.168.0.1 [which is the gateway]? Tell 192.168.0.103". It doesn't get any replies though. When I ping another IP-address for which it knows the mac-address (from cache), it turns out a packet loss of 90% occurs, and even if a packet manages to arrive it takes around 3000ms. The output of route -n is: Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth1 192.168.0.0 0.0.0.0 255.255.255.0 U 9 0 0 eth1 192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0 Before upgrading, wifi worked fine. Using other devices, wifi still works fine.Resetting the router didn't help. Ethernet still works after upgrading. Any suggestions? Update: I'm using the wl driver. Here's the relevant output of some commands: lspci | grep Wireless 03:00.0 Network controller: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter (rev 01) cat /etc/modprobe.d/blacklist.conf [...] blacklist mac80211 blacklist brcm80211 blacklist cfg80211 blacklist lib80211_crypt_tkip blacklist lib80211 blacklist b43 cat /etc/rc.local sudo modprobe -r lib80211 sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211.ko sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_wep.ko sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_tkip.ko sudo insmod /lib/modules/3.2.0-30-generic-pae/kernel/net/wireless/lib80211_crypt_ccmp.ko sudo modprobe wl exit 0 The last lines are probably how I got wireless working after the previous upgrade (wireless has been a problem after each upgrade). Update 2: added information about the exact hardware below. The hardware is an integrated device, so I ran lspci -nn | grep -i network. The output is: 03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11bgn Wireless Network Adapter [14e4:4727] (rev 01)
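
    Given the wl driver, a BCM4313, and rc.local inserting modules built for a 3.2.0 kernel from an earlier release, one hedged thing to try is rebuilding the Broadcom driver against the kernel 13.10 actually boots (package names are the stock Ubuntu ones):

        uname -r                        # confirm the running kernel
        sudo apt-get install --reinstall bcmwl-kernel-source linux-headers-$(uname -r)
        sudo modprobe -r wl && sudo modprobe wl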

    Read the article

  • Unintentional run-in with C# thread concurrency

    - by geekrutherford
    For the first time today we began conducting load testing on an ASP.NET application already in production. Obviously you would normally want to load test prior to releasing to a production environment, but that isn't the point here.
    We ran a test which simulated 5 users hitting the application doing the same actions simultaneously. The first few pages visited seemed fine and then things just hung for a while before the test failed. While the test was running I was viewing the performance counters on the server, noting that the CPU was consistently pegged at 100% until the testing tool gave up.
    Fortunately the application logs all exceptions, including those unhandled, to the database (thanks to log4net). I checked the log and lo and behold the error was:
    System.ArgumentException: An item with the same key has already been added. (The rest of the stack trace intentionally omitted)
    Since the code was running with debug on, the line number where the exception occurred was also provided. I began inspecting the code and almost immediately it hit me: the section of code responsible for the exception is trying to initialize a static class. My next question was how is this code being hit multiple times when I have a rudimentary check already in place to prevent this kind of thing (i.e. a check on a public variable of the static class before entering the initializing routine). The answer... the check fails because the value is not set before other threads have already made it through.
    Not being one who consistently works with threading I wasn't quite sure how to handle this problem. Fortunately a co-worker recalled having to lock a section of code in the past but couldn't recall exactly how. After a quick search on Google the solution is as follows:
    private static readonly object objLock = new object();
    lock(objLock)
    {
        //logic requiring lock
    }
    The lock statement takes an object (shared by all threads, hence the static field) and tells the .NET runtime that the current thread has exclusive access while the code within brackets is executing. Once the code completes, the lock is released for another thread to utilize.
    In my case, I only need to execute the inner code once to initialize my static class. So within the brackets I have a check on a public variable to prevent it from being initialized again.

    Read the article

  • Java Grade Calculator program for class keeps getting the error "Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException" [migrated]

    - by user2880621
    this is my first question I've posted. I've used this site to look at other questions to help with similar problems I've had. After some cursory looking I couldn't find quite the answer I was looking for so I decided to finally succumb and create an account. I am fairly new to java, only a few weeks into my first class. Anyway, my project is to create a program which takes any amount of students and their grades, and then assign them a letter grade. The catch is, however, that it is on a sort of curve and the other grades' letter are dependent on the the highest. Anything equal to or 10 points below the best grade is an a, anything 11-20 points below is a b, and so on. I am to use an array, but I get this error when ran "Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException." I will go ahead and post my code down below. Thanks for any advice you may be able to give. package grade.calculator; import java.util.Scanner; /** * * @author nichol57 */ public class GradeCalculator { /** * @param args the command line arguments */ public static void main(String[] args) { Scanner input = new Scanner(System.in); System.out.println("Enter the number of students"); int number = input.nextInt(); double[]grades = new double [number]; for (int J = number; J >=0; J--) { System.out.println("Enter the students' grades"); grades[J] = input.nextDouble(); } double best = grades[0]; for (int J = 1; J < number; J++) { if (grades[J] >= best){ best = grades[J]; } } for (int J = 0;J < number; J++){ if (grades[J] >= best - 10){ System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "A"); } else if (grades[J] >= best - 20){ System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "B"); } else if (grades[J] >= best - 30) { System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "C"); } else if (grades[J] >= best - 40) { System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "D"); } else { System.out.println("Student " + J + " score is " + grades[J] + " and grade is " + "F"); } } // end for loop for output }// end main method }

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is write smaller, more testable code refactor existing code into smaller and smaller chunks of code until most of your methods/functions are just a few lines long write functions that only do one thing (which makes them smaller again) This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines, and big classes doing everything. My question is this: when it call comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages), generate an extra set of instructions, thereby increasing code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to ASM before it is actually ran on the hardware, but that optimization can only do so much too. Hence, my question -- how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything", due to this overhead? UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes in a code will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with proper variables, not the actual computation being done. In my case, you load up CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more overhead "linkage" is required. I am concerned that this linkage adds to software bloat and slow-down and I am wondering if I should be concerned about this, and how much, if any at all, because current and future generations of programmers who are building software for the next century, will have to live with and consume software built using these practices. UPDATE: Multiple files I am writing new code now that is slowly replacing old code. In particular I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded, they are loaded as needed, and disk caching and memory caching options exist, and yet still I believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.

    Read the article

  • How to set up an rsync backup to Ubuntu securely?

    - by ws_e_c421
    I have been following various other tutorials and blog posts on setting up a Ubuntu machine as a backup "server" (I'll call it a server, but it's just running Ubuntu desktop) that I push new files to with rsync. Right now, I am able to connect to the server from my laptop using rsync and ssh with an RSA key that I created and no password prompt when my laptop is connected to my home router that the server is also connected to. I would like to be able to send files from my laptop when I am away from home. Some of the tutorials I have looked at had some brief suggestions about security, but they didn't focus on them. What do I need to do to let my laptop with send files to the server without making it too easy for someone else to hack into the server? Here is what I have done so far: Ran ssh-keygen and ssh-copy-id to create a key pair for my laptop and server. Created a script on the server to write its public ip address to a file, encrypt the file, and upload to an ftp server I have access to (I know I could sign up for a free dynamic DNS account for this part, but since I have the ftp account and don't really need to make the ip publicly accessible I thought this might be better). Here are the things I have seen suggested: Port forwarding: I know I need to assign the server a fixed ip address on the router and then tell the router to forward a port or ports to it. Should I just use port 22 or choose a random port and use that? Turn on the firewall (ufw). Will this do anything, or will my router already block everything except the port I want? Run fail2ban. Are all of those things worth doing? Should I do anything else? Could I set up the server to allow connections with the RSA key only (and not with a password), or will fail2ban provide enough protection against malicious connection attempts? Is it possible to limit the kinds of connections the server allows (e.g. only ssh)? I hope this isn't too many questions. I am pretty new to Ubuntu (but use the shell and bash scripts on OSX). I don't need to have the absolute most secure set up. I'd like something that is reasonably secure without being so complicated that it could easily break in a way that would be hard for me to fix.
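
    As a rough sketch of what those pieces look like in practice — the port number and user name below are arbitrary examples, not values specific to this setup:

        # /etc/ssh/sshd_config on the backup server
        Port 22022                     # optional; staying on 22 is fine, a high port mainly cuts log noise
        PasswordAuthentication no      # key-only logins, so guessed passwords are useless
        PermitRootLogin no
        AllowUsers backupuser          # only this account may log in over ssh

        sudo ufw allow 22022/tcp       # match whatever port sshd listens on
        sudo ufw enable
        sudo apt-get install fail2ban  # its default jail already watches sshd
        sudo service ssh restart       # apply the sshd_config changes

    The router then only forwards that single TCP port to the server's fixed LAN address, so nothing else on the machine is reachable through the router from outside.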

    Read the article

  • Postgres - could not create any TCP/IP sockets

    - by Jacka
    I'm running a rails app in development with postgresql 9.3. When I tried to start passenger server today, I got: PG::ConnectionBad - could not connect to server: Connection refused Is the server running on host "localhost" (217.74.65.145) and accepting TCP/IP connections on port 5432? No big deal I thought, that happened before. Restarting postgres always solved the problem. So I ran sudo service postgresql restart and got: * Restarting PostgreSQL 9.3 database server * The PostgreSQL server failed to start. Please check the log output: 2014-06-11 10:32:41 CEST LOG: could not bind IPv4 socket: Cannot assign requested address 2014-06-11 10:32:41 CEST HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. 2014-06-11 10:32:41 CEST WARNING: could not create listen socket for "localhost" 2014-06-11 10:32:41 CEST FATAL: could not create any TCP/IP sockets ...fail! My postgresql.conf points to the defaults: localhost and port 5432. I tried changing the port but the error message is the same (except the port change). Both ps aux | grep postgresql and ps aux | grep postmaster return nothing. EDIT: In postgresql.conf I changed listen_addresses to 127.0.0.1 instead of localhost and it did the trick, server restarted. I also had to edit my applications' db config and point to 127.0.0.1 instead of localhost. However, the question is now, why is localhost considered to be 217.74.65.145 and not 127.0.0.1? That's my /etc/hosts: 127.0.0.1 local 127.0.1.1 jacek-X501A1 127.0.0.1 something.name.non.example.com 127.0.0.1 company.something.name.non.example.com
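
    On the last question: the resolver consults /etc/hosts before DNS, and the file above maps 127.0.0.1 to "local" and two other names but never to "localhost" itself, so the lookup falls through to DNS and comes back with the public 217.74.65.145 address. A sketch of the conventional first lines plus a quick check:

        # /etc/hosts -- stock Ubuntu layout
        127.0.0.1   localhost
        127.0.1.1   jacek-X501A1

        getent hosts localhost   # should now print 127.0.0.1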

    Read the article

  • How to import .ops file in Outlook 2007

    - by r0ca
    Hi all, I installed ORK.exe on my Windows 7 machine to create a copy of my profile so I will have it as a template to push to new users. So basically, I ran the Profile Wizard to create an .ops file. I saved it in My Documents for the moment. Right now, I'm trying to import that file into my Outlook but I just can't figure it out! My goal is to have a profile template to push to new users when they log in for the first time. Thanks a bunch in advance! EDIT: I just figured it out. But I have a problem: when I created my profile, Outlook 2007 on Exchange was not in cache mode, so I created my profile with the Profile Wizard with cache mode disabled and saved my outlook.ops in My Documents. Then I enabled cache mode, so when I import my .ops file it should be restored with cache mode disabled... but when I do that, cache mode is still checked (which I don't want). Am I doing something wrong? I guess so, but I just can't figure it out.

    Read the article

  • Powerpoint missing from DCOM config

    - by Paul Prewett
    I have an application that automates the creation of powerpoint files in an ASP.NET environment. This requires that I install powerpoint on the server and also set permissions in the DCOM configuration snap-in (dcomcnfg) to give permissions to the launching user ([DOMAIN]\ASPNET in this case) to run the application. I have this setup running successfully on several Win2k3 machines. I am configuring my first Win2k8 machine and after installing powerpoint on the server, the "Microsoft Powerpoint Presentation" node in DCOM config is not showing up. Other installed Office apps are showing (Excel, Graph, etc...), just not Powerpoint. So when I attempt to run the application, I get an "Access denied" error, which is exactly what I would expect. The user doesn't have permission. Therefore, access denied. The specific error log entry is: The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {91493441-5A91-11CF-8700-00AA0060263B} to the user [DOMAIN]\ASPNET I searched the entire list for the CLSID, too, thinking maybe the name wasn't loading properly. No dice. I also re-ran the setup program for Office thinking maybe there would be some option or something I unchecked in the custom setup options, but I saw nothing that looked helpful. I'm flummoxed. Can anyone out there suggest something to help me get Powerpoint to show up in the list of DCOM applications? Many thanks.
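
    One hedged thing to check before digging further: on 64-bit Windows a 32-bit Office install registers its COM applications where the default 64-bit dcomcnfg view does not list them, but the 32-bit Component Services console does (assuming Office here is the usual 32-bit build):

        mmc comexp.msc /32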

    Read the article

  • Microsoft Standalone CA - Set expiration date of an individual request

    - by Hall72215
    I have set up a Microsoft Standalone CA on 2008 R2 as a root CA. I'm trying to setup a subordinate Enterprise CA. I generated the certificate request, and submitted it to the root CA. Then, I ran the following command to set the expiration date to 20 years (the request ID is 5): certutil -setattributes 5 "ValidityPeriod:Years\nValidityPeriodUnits:20" Then, I approved the request, but it failed. The Request Status Code is: The specified time is invalid. 0x8007076d (WIN32: 1901) The Request Disposition Message is: Denied by Policy Module 0x8007076d, The requested validity period is invalid. Confirm that the validity period or expiration data and time specified in the request does not extend beyond the validity period of the CA certificate, the certificate template, and the CA. The validity period of the CA can be verified by running the following commands: certutil -getreg ca\validityPeriod & certutil -getreg ca\ValidityPeriodUnits The validity period of the CA certificate is 40 years (expires in 2052). The template condition doesn't apply since this is a standalone CA. The result of those commands is Years and 1, respectively. It appears that I will need to change the CA's validityPeriod and validityPeriodUnits. But, I want to keep the default expiration for a request at 1 year. Is there a way to set a maximum and default expiration, or am I going to have to change it, issue the certificate, and then change it back?
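
    If temporarily changing the CA-wide default is acceptable, the usual sequence is roughly the following (a sketch; ValidityPeriod is already "Years" per the output above, so only the units need to change, and note that anything else issued in this window also gets the 20-year period):

        certutil -setreg ca\ValidityPeriodUnits 20
        net stop certsvc && net start certsvc
        rem approve or resubmit request 5 now, then put the default back
        certutil -setreg ca\ValidityPeriodUnits 1
        net stop certsvc && net start certsvc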

    Read the article

  • What do "Unknown SSAP" and "Unknown DSAP" mean in tcpdump?

    - by lacker
    While trying to fix a problem with intermittently losing internet connection on a machine with a wireless connection to a router, I ran tcpdump and noticed packets with "Unknown SSAP" and "Unknown DSAP" errors coming at a rate of a few per second. 20:27:21.703178 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 171 20:27:21.724726 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe2 Information, send seq 0, rcv seq 16, Flags [Response], length 104 20:27:21.746449 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe4 Information, send seq 0, rcv seq 16, Flags [Response], length 88 20:27:21.970963 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xe8 Information, send seq 0, rcv seq 16, Flags [Response], length 76 20:27:22.016565 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 88 20:27:22.038471 00:24:a5:af:24:f6 (oui Unknown) Unknown SSAP 0xde > 1c:65:9d:48:38:95 (oui Unknown) Unknown DSAP 0xea Information, send seq 0, rcv seq 16, Flags [Response], length 171 What does the "Unknown SSAP" and "Unknown DSAP" mean, and does it indicate a problem?
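
    For what it's worth, frames tcpdump prints with SSAP/DSAP fields are 802.2 LLC traffic rather than IP packets, so one hedged way to see whether anything actually relevant to the dropouts is on the wire is to capture only the higher-level protocols (interface name is a placeholder):

        sudo tcpdump -ni wlan0 ip or ip6 or arp   # hides the LLC chatter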

    Read the article
