Search Results

Search found 582 results on 24 pages for 'aaron rodgers'.

Page 7/24

  • Account Lockout with pam_tally2 in RHEL6

    - by Aaron Copley
    I am using pam_tally2 to lock out accounts after 3 failed logins per policy; however, the connecting user (via SSH) does not receive the error indicating pam_tally2's action. On the 4th attempt I expect to see:

        Account locked due to 3 failed logins

    No combination of required or requisite, and no ordering in the file, seems to help. This is under Red Hat 6, and I am using /etc/pam.d/password-auth. The lockout does work as expected, but the user does not receive the error described above. This causes a lot of confusion and frustration, as they have no way of knowing why authentication fails when they are sure they are using the correct password. The implementation follows NSA's Guide to the Secure Configuration of Red Hat Enterprise Linux 5 (pg. 45). It's my understanding that the only thing changed in PAM is that /etc/pam.d/sshd now includes /etc/pam.d/password-auth instead of system-auth. Quoting the guide:

        If locking out accounts after a number of incorrect login attempts is required by your security policy, implement use of pam_tally2.so. To enforce password lockout, add the following to /etc/pam.d/system-auth. First, add to the top of the auth lines:
            auth required pam_tally2.so deny=5 onerr=fail unlock_time=900
        Second, add to the top of the account lines:
            account required pam_tally2.so

    EDIT: I can get the error message to appear by resetting pam_tally2 during one of the login attempts:

        user@localhost's password: (bad password)
        Permission denied, please try again.
        user@localhost's password: (bad password)
        Permission denied, please try again.
        (reset pam_tally2 from another shell)
        user@localhost's password: (good password)
        Account locked due to ...
        Account locked due to ...
        Last login: ...
        [user@localhost ~]$
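
    For reference, a minimal sketch of how to inspect and clear the failure counter while testing, assuming pam_tally2 is installed in its default location (the username is a placeholder):

        # show the current failure count for a user
        pam_tally2 --user=someuser
        # clear the counter so the account unlocks immediately
        pam_tally2 --user=someuser --reset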

    Read the article

  • Chicago SQL Saturday

    - by Johnm
    This past Saturday, April 17, 2010, I journeyed north to the great city of Chicago for some SQL Server fun, learning and fellowship. The Chicago edition of this grassroots phenomenon was the 31st scheduled SQL Saturday since the program's birth in late 2007. The Chicago SQL Saturday consisted of four tracks with eight sessions each and was a very energetic and fast-paced day for the roughly 300 SQL Server enthusiasts in attendance. The speaker line-up included national notables such as Kevin Kline, Brent Ozar, and Brad McGehee. My hometown of Indianapolis was well represented in the speaker line-up with Arie Jones, Aaron King and Derek Comingore.

    The day began with a very humorous keynote by Kevin Kline and Brent Ozar, who emphasized the importance of community events such as SQL Saturday and the monthly user group meetings. They also brilliantly included the impact that getting involved in the SQL community through social media can have on your professional career.

    My approach to the day was to try to experience as much of the event as I could, so there were very few sessions that I attended for their full duration. I leaped from session to session like a bumblebee, gleaning bits of nectar from each session. Amid these leaps I took the opportunity to briefly chat with some of the in-the-queue speakers as well as other attendees who wandered the hallways. I especially enjoyed a great discussion with Devin Knight about his plans for the upcoming Jacksonville SQL Saturday, as well as an interesting SQL interpretation of the Iron Chef, which I think would catch on like wildfire.

    There were two sessions that stood out as exceptional, so much so that I could not pull myself away:

    Kevin Kline presented on "SQL Server Internals and Architecture". This session could have been classified as one intended for the beginner; Kevin even personally warned me of such as I entered the room. I am a believer in revisiting the basics regardless of your level of mastery, so I entered this session in that spirit. It was a very clear and precise presentation, masterfully illustrated and demonstrated.

    Brad McGehee presented on "How and When to Use Indexed Views". This was a topic that I had recently been exploring and was considering for use in an integration project. Brad effectively communicated the complexity of this feature and what is involved to gain its full benefit. It was clear at the conclusion of this session that it was not the right feature for my specific needs.

    Overall, the event was a great success. The use of volunteers, from an attendee's perspective, was masterful. The only recommendation that I would have for the next Chicago SQL Saturday would be to include more time in between sessions to permit some level of networking among the attendees, one-on-one questions for speakers and visits to the sponsor booths. Congratulations to Wendy Pastrick, Ted Krueger, and Aaron Lowe for their efforts and a very successful SQL Saturday!

    Read the article

  • Core i5 Turboboost/C states Freezing computer

    - by Aaron Smith
    I'm not sure which, but I just had a heck of a time getting my computer to boot up and not freeze. It would run until it finished booting Windows, then everything would freeze. This happened until I turned off Turbo Boost and all the C-states on the processor. What could be causing this? Is the processor going bad?

    Read the article

  • Google Chrome Extensions: Launch Event (part 4)

    Google Chrome Extensions: Launch Event (part 4) Video Footage from the Google Chrome Extensions launch event on 12/09/09. Aaron Boodman and Erik Kay, technical leads for the Google Chrome extensions team, discuss the UI surfaces of Google Chrome extensions and the team's "content, not chrome" philosophy. They also highlight the smooth, frictionless install and uninstall process for Google Chrome's extensions system and present the team's initiatives in the space of security and performance. From: GoogleDevelopers Views: 2963 12 ratings Time: 15:44 More in Science & Technology

    Read the article

  • apache2 VirtualHost in Mac OS X home directory

    - by aaron
    I am running MacPorts apache2 on Mac OS X 10.5. Whenever I configure a virtual host in the default folder it works; however, when I configure the virtual host in my home directory I get a "403 Forbidden" error. How do I configure a vhost in my home directory?

    Here is the configuration that yields "403 Forbidden" when I access "devel.mysite.com" (/opt/local/apache2/conf/extra/httpd-vhosts.conf):

        DocumentRoot "/opt/local/apache2/htdocs"
        ServerName *
        #CustomLog "" common
        <VirtualHost *:80>
            #DocumentRoot "/opt/local/apache2/htdocs/mysite"
            DocumentRoot "/Users/myuser/Sites/mysite"
            ServerName devel.mysite.com
        </VirtualHost>

    The error message in /opt/local/apache2/logs/devel.mysite.com-error_log:

        [Sat Apr 17 19:54:49 2010] [error] [client 127.0.0.1] client denied by server configuration: /Users/myuser/Sites/mysite/

    When I uncomment the line to point DocumentRoot at /opt/local/apache2/htdocs/mysite, it works:

        DocumentRoot "/opt/local/apache2/htdocs"
        ServerName *
        #CustomLog "" common
        <VirtualHost *:80>
            DocumentRoot "/opt/local/apache2/htdocs/mysite"
            #DocumentRoot "/Users/myuser/Sites"
            ServerName devel.mysite.com
        </VirtualHost>

    I get no errors or warnings when I start Apache, and the only thing logged on startup is this (in /opt/local/apache/logs/error_log):

        [Sat Apr 17 19:56:29 2010] [notice] Digest: generating secret for digest authentication ...
        [Sat Apr 17 19:56:29 2010] [notice] Digest: done
        [Sat Apr 17 19:56:29 2010] [notice] Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8m DAV/2 configured -- resuming normal operations

    A few notes:
        * The permissions of /Users/myuser/Sites/mysite are 755, owned by myuser, group staff
        * Everything else works as expected, until I move the DocumentRoot of the vhost to the directory in my home
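
    In case it helps future readers: with Apache 2.2, a 403 like this often means the new DocumentRoot is not covered by any <Directory> block that grants access. A minimal sketch of what that might look like, assuming the paths above (added to httpd-vhosts.conf or the main config):

        <Directory "/Users/myuser/Sites/mysite">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>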

    Read the article

  • NHibernate 2 Beginner's Guide Book

    - by Ricardo Peres
    Packt Publishing has recently released a new book on NHibernate: NHibernate 2 Beginner's Guide, by Aaron Cure. I am now reading the final version, which Packt Publishing was kind enough to provide me, and I will write about it soon. I can tell you for now that Fabio Maulo was one of the reviewers, which certainly raises expectations. In the meanwhile, there's a free chapter you can download, which hopefully will get you interested in it; you can get it from here.

    Read the article

  • ubuntu 9.10 cups server cupsd.conf

    - by aaron
    I have a CUPS server running on Ubuntu 9.10 on my home network. Right now I can access it at 192.168.1.101:631, but when I try to access it at myservername.local:631 I get a 400 Bad Request. Here's the relevant section from my current cupsd.conf:

        ServerName 192.168.1.101

        # Only listen for connections from the local machine.
        Listen localhost:631
        Listen /var/run/cups/cups.sock
        # any of the below 'Listen' directives all yield the same result
        Listen 192.168.1.101:631
        #Listen *:631
        #Listen myservername.local:631

        # Show shared printers on the local network.
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        BrowseAddress 192.168.1.255

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        <Location />
          Order deny,allow
          Deny from All
          Allow from 127.0.0.1
          Allow from 192.168.1.*
        </Location>

        # Restrict access to the admin pages...
        <Location /admin>
          Order deny,allow
          Deny from All
          #Allow from 127.0.0.1
          #Allow from 192.168.1.*
        </Location>

        # Restrict access to configuration files...
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
          Deny from All
          #Allow from 127.0.0.1
          #Allow from 192.168.1.*
        </Location>

    I get the following in /var/log/cups/error_log:

        E [03/Jan/2010:18:33:41 -0600] Request from "192.168.1.100" using invalid Host: field "myservername.local:631"

    What do I need to do to be able to access the CUPS server at both 192.168.1.101:631 and myservername.local:631?
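
    A hedged note: the "invalid Host: field" error is usually about which host names CUPS will accept, which is controlled by the ServerAlias directive. A minimal sketch, assuming the host name above:

        # in cupsd.conf, alongside ServerName
        ServerAlias myservername.local
        # or, less restrictively, accept any Host: header
        #ServerAlias *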

    Read the article

  • An XEvent a Day (2 of 31) – Querying the Extended Events Metadata

    - by Jonathan Kehayias
    In yesterday's post, An Overview of Extended Events, I provided some of the necessary background for Extended Events that you need to understand to begin working with Extended Events in SQL Server. After receiving some feedback by email (thanks Aaron, I appreciate it), I have changed the post naming convention associated with the post to reflect "2 of 31" instead of 2/31, which apparently caused some confusion with Paul Randal's and Glenn Berry's series, which were mentioned in the round-up post for...(read more)

    Read the article

  • AsteriskNow Migration / Shared Extension Space

    - by Aaron C. de Bruyn
    I am testing the possibility of migrating from an old Avaya phone system to AsteriskNow. The migration would cover several hundred phones, but spread out over several years. (Management wants to move buildings to the new phone system one by one as cables get cut or time permits.) Two other directives are that extensions must not change and that there must be a GUI that other admins (non-Linux geeks) can manage. They currently use 9XXX for all extensions.

    We linked the Avaya and Asterisk boxes via a PRI card and they are both communicating. From the Avaya side, if we move (for example) extension 9001 to Asterisk, we forward the call over the PRI to the AsteriskNow box and the SIP phone rings. In AsteriskNow we have an outgoing rule '_9XXX' that routes all 4-digit extensions starting with 9 back to Avaya.

    Here's the trouble: dialing 9001 (the extension moved over to AsteriskNow) causes the call to be routed out the PRI to the Avaya box, then the Avaya box routes the call back to Asterisk, and Asterisk routes it to the SIP phone. As we get more and more users switched over, this will use up more and more channels on the PRI card.

    Is there a way I can ask Asterisk to check its local extensions first, and only then forward off to the Avaya system if the number matches '_9XXX'? (I know how I can do it when editing the raw config files; I'm just looking for a way to do it in the GUI so other admins can manage it if necessary.) As a last-ditch plan, I know I can specifically add '_9001' as an outgoing call rule and send it directly to extension 9001, but I'd really hate to do that for several hundred phones.
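
    For what it's worth, the raw-dialplan approach the poster alludes to might look roughly like this in extensions.conf (the context and device names here are hypothetical): try the locally registered SIP device first and only fall back to the PRI trunk when no such local device exists.

        [from-internal]
        ; try a locally registered SIP phone first
        exten => _9XXX,1,Dial(SIP/${EXTEN},20)
        ; if there is no local device, hand the call to the Avaya over the PRI
        exten => _9XXX,n,GotoIf($["${DIALSTATUS}" = "CHANUNAVAIL"]?avaya:hangup)
        exten => _9XXX,n(avaya),Dial(DAHDI/g0/${EXTEN})
        exten => _9XXX,n(hangup),Hangup()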

    Read the article

  • Five Things To Which SQL Server Should Say "Goodbye and Good Riddance"

    - by Adam Machanic
    I was tagged by master blogger Aaron Bertrand and asked to identify five things that should be removed from SQL Server. Easy enough, or so I thought... 1) Tempdb. But I should qualify that a bit. Tempdb is absolutely necessary for SQL Server to properly function, but in its current state is easily the number one bottleneck in the majority of SQL Server instances. Many other DBMS vendors abandoned the "monolithic, instance-scoped temporary data space" years ago, yet SQL Server soldiers on, putting...(read more)

    Read the article

  • Icinga error "Icinga Startup Delay does not exist" although it does

    - by aaron
    I just installed Icinga to monitor my server following this guide: http://docs.icinga.org/0.8.1/en/wb_quickstart-idoutils.html

    Everything built and installed correctly, but Icinga is reporting a critical error with the reason: "The command defined for service Icinga Startup Delay does not exist". However, I can see that ${ICINGA_BASE}/etc/objects/localhost.cfg contains:

        define service{
            use                     local-service   ; Name of service template to use
            host_name               localhost
            service_description     Icinga Startup Delay
            check_command           check_icinga_startup_delay
            notifications_enabled   0
        }

    and ${ICINGA_BASE}/etc/objects/commands.cfg contains:

        define command {
            command_name    check_icinga_startup_delay
            command_line    $USER1$/check_dummy 0 "Icinga started with $$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$)) seconds delay | delay=$$(($EVENTSTARTTIME$-$PROCESSSTARTTIME$))"
        }

    Neither of these files has been modified since the whole make/install process. I am running on Ubuntu 10.04, the most recent build of icinga-core, and apache2 2.2.14.

    What must I do to tell Icinga that the command exists? Or is the problem that check_dummy does not exist? Where or how would I define that?
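
    One quick sanity check, assuming the quickstart's default prefix of /usr/local/icinga (an assumption; adjust the paths to your install): confirm what $USER1$ points to and that the check_dummy plugin actually exists there and runs.

        # $USER1$ is defined in resource.cfg; the path below is an assumption
        grep USER1 /usr/local/icinga/etc/resource.cfg
        # check_dummy takes a return state and an optional message
        /usr/local/icinga/libexec/check_dummy 0 "plugin is installed and executable"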

    Read the article

  • VMWare ESX Updates - Which to Apply?

    - by Aaron Alton
    Wondering what more experienced ESX admins typically do... I just brought our ESX hosts up to 3.5 Update 5 (yes, I know we're still behind). I then applied the "Critical Host Updates" baseline in VMware Update Manager, and found that we're still short on 14 "critical updates". My question is, do most people go ahead and apply any update flagged as critical, or do they evaluate each update one by one to determine whether or not the issue it addresses is likely to affect them? In the SQL Server world (my alma mater, so to speak), we regularly apply service packs, and sometimes cumulative updates, but we only apply hotfixes when the issue they target affects us. Does the same logic hold fast in VMware land?

    Read the article

  • DNS Help (CNAMEs and A Records)

    - by Aaron Francis
    I'm trying to set up my DNS properly so that I can have hosting through PHPFog and email services through MailGun. PHPFog has us redirect the naked domain to the www and then use a CNAME to point the www to PHPFog, and MailGun provides the MX records to use. The problem I'm having is that I have no A record set up on Hover, because when I do, the CNAME no longer works (?), or at least it seems that way: I am no longer seeing my site from PHPFog, I'm seeing a Hover landing page. I know all the records I need, I just can't seem to get them to play nicely together. I've been told Amazon's Route 53 should be able to solve my problem, but I haven't yet figured out how. I just need to have hosting at PHPFog and email services through MailGun. As you can probably tell, I have only a very limited understanding of DNS, so forgive me if this is a silly question.
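
    A rough sketch of how those records often end up looking, with placeholder names and addresses (the real targets come from PHPFog and MailGun; Route 53 or Hover would let you enter the equivalent records in their own UI):

        example.com.        A       203.0.113.10              ; apex must be an A record, a CNAME is not allowed here
        www.example.com.    CNAME   yourapp.phpfogapp.com.    ; points the www host at PHPFog
        example.com.        MX 10   mxa.mailgun.org.          ; mail handled by MailGun
        example.com.        MX 10   mxb.mailgun.org.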

    Read the article

  • T-SQL Tuesday #31: Paradox of the Sawtooth Log

    - by merrillaldrich
    Today’s T-SQL Tuesday, hosted by Aaron Nelson (@sqlvariant | sqlvariant.com), has the theme Logging. I was a little pressed for time today to pull this post together, so this will be short and sweet. For a long time, I wondered why and how a database in Full Recovery Mode, which you’d expect to have an ever-growing log -- as all changes are written to the log file -- could in fact have a log usage pattern that looks like this: This graph shows the Percent Log Used (bold, red) and the Log File(s)...(read more)

    Read the article

  • GPO Startup script did not execute on some computers

    - by Aaron Ooi
    The GPO startup script works fine on some machines but never runs on the other half. gpresult shows that the GPO was applied. I ran RSOP and it shows that the startup script is there, but it was never executed. There is nothing in the Application log, or anything related to a failed execution, in Event Viewer. I have also set "Allow slow network connection", but it did not help the startup script to execute. Read/execute permission is granted to Domain Computers and Authenticated Users. Other settings in the GPO work; only the startup script does not execute. The script itself runs without any issue on the machines where it does execute. I need help sorting this out, as it troubles me that half of the machines do not run the script at all. They are all Windows 7.

    Read the article

  • Silverlight Cream for June 28, 2011 -- #1112

    - by Dave Campbell
    In this Issue: WindowsPhoneGeek, John Papa, Mike Taulty, Erno de Weerd, Stephen Price, Chris Rouw, Peter Kuhn, Damian Schenkelman, Michael Washington, and Manas Patnaik.

    Above the Fold:
        Silverlight: "Binding to View Model properties in Data Templates. The RootBinding Markup Extension" by Damian Schenkelman
        WP7: "Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3" by Chris Rouw
        LightSwitch: "Saving Files To File System With LightSwitch (Uploading Files)" by Michael Washington

    Shoutouts:
        Steve Wortham announced a change to his SilverlightXAP.com site... they're now accepting XAML illustrations: Introducing XAML Illustrations, Increased Payouts to Contributors, and More
        Amid all the discussions that I've tried to avoid, Michael Washington is Betting The House On LightSwitch

    From SilverlightCream.com:
        Dynamically updating a data bound LongListSelector in Windows Phone
            WindowsPhoneGeek's latest is on using the LongListSelector from the Toolkit and dynamically updating it with data... detailed guidelines and plenty of pictures and code as always.
        Silverlight TV 77: Exploring 3D with Aaron Oneal
            John Papa has Silverlight TV number 77 up and is chatting with Aaron Oneal, program manager of the Silverlight 3D efforts... too cool.
        Silverlight WebBrowser Control for Offline Apps (Part 2)
            Mike Taulty wrote this post in Silverlight 5 Beta, but says it should be fine in 4... a continuation of his HTML content display using the WebBrowser control while offline.
        Windows Phone 7: Databinding and the Pivot Control
            Erno de Weerd discusses the Pivot control in WP7 based on his attempts to use it in an app.
        Required Attribute on an Entity
            Stephen Price has a new post at XAML Source... first is this one on setting the required attribute and the troubles you can get into if it's not set correctly.
        Storing Files in SQL Server using WCF RIA Services and Silverlight – Part 3
            Chris Rouw has Part 3 of his series on storing files in SQL Server using FILESTREAM storage in SQL Server 2008 and Silverlight... this time he's viewing files stored in the FILESTREAM from the LOB app.
        Getting ready for the Windows Phone 7 Exam 70-599 (Part 4)
            Peter Kuhn has Part 4 of his series on getting ready for the WP7 exam up at SilverlightShow... the date is coming up soon... are you ready?
        Binding to View Model properties in Data Templates. The RootBinding Markup Extension
            Damian Schenkelman has a Silverlight 5 Beta post up... digging into the XAML Markup Extensions and popping out a RootBindingExtension that helps bind to a property in a view model from a DataTemplate.
        Saving Files To File System With LightSwitch (Uploading Files)
            Michael Washington has a cool tutorial up on his new LightSwitch Help Website... File Upload to a server file system using LightSwitch, plus a project to download... good stuff!
        Microsoft Media Platform (MMPPF): Player Framework for Silverlight
            Manas Patnaik's latest post is about the Media Player Project... some of the history of media with Silverlight and how to go about using the Media Player Project bits.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Bash Script - Traffic Shaping

    - by Craig-Aaron
    Hey all, I was wondering if you could have a look at my script and help me add a few things to it. How do I get it to find how many active ethernet ports I have, and how do I filter on more than one ethernet port? How do I get this to handle a range of IP addresses? Once I have a few ethernet ports, I need to add traffic control to each one.

        #!/bin/bash

        # Name of the traffic control command.
        TC=/sbin/tc

        # The network interface we're planning on limiting bandwidth.
        IF=eth0                 # Network card interface

        # Download limit (in mega bits)
        DNLD=10mbit             # DOWNLOAD Limit

        # Upload limit (in mega bits)
        UPLD=1mbit              # UPLOAD Limit

        # IP address range of the machine we are controlling
        IP=192.168.0.1          # Host IP

        # Filter options for limiting the intended interface.
        U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32"

        start() {
            # Hierarchical Token Bucket (HTB) to shape bandwidth
            $TC qdisc add dev $IF root handle 1: htb default 30          # Creates the root scheduler
            $TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD   # Child class to shape download
            $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD   # Child class to shape upload
            $U32 match ip dst $IP/24 flowid 1:1                          # Filter to match the interface, limit download speed
            $U32 match ip src $IP/24 flowid 1:2                          # Filter to match the interface, limit upload speed
        }

        stop() {
            # Stop the bandwidth shaping.
            $TC qdisc del dev $IF root
        }

        restart() {
            # Self-explanatory.
            stop
            sleep 1
            start
        }

        show() {
            # Display status of traffic control.
            $TC -s qdisc ls dev $IF
        }

        case "$1" in
            start)
                echo -n "Starting bandwidth shaping: "
                start
                echo "done"
                ;;
            stop)
                echo -n "Stopping bandwidth shaping: "
                stop
                echo "done"
                ;;
            restart)
                echo -n "Restarting bandwidth shaping: "
                restart
                echo "done"
                ;;
            show)
                echo "Bandwidth shaping status for $IF:"
                show
                echo ""
                ;;
            *)
                pwd=$(pwd)
                echo "Usage: tc.bash {start|stop|restart|show}"
                ;;
        esac

        exit 0

    Thanks!
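
    Not a full answer, but a minimal sketch of one way to discover the active interfaces and loop over them, assuming a Linux box with /sys mounted; the tc commands from the script above would go inside the loop with $IF set on each pass:

        # enumerate non-loopback interfaces that are currently up
        for IF in /sys/class/net/*; do
            IF=$(basename "$IF")
            [ "$IF" = "lo" ] && continue
            [ "$(cat /sys/class/net/$IF/operstate)" = "up" ] || continue
            echo "would apply shaping rules to $IF here"
        done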

    Read the article

  • Cent OS ifcfg configuration for ranges of IP's with different netmask

    - by Aaron Schlegel
    I have one set of 30 public IPs with a netmask of 255.255.255.0 and another set of 30 IPs with a netmask of 255.255.255.128. Both sets of IPs also have different gateways. How can I virtually assign the IPs to the machine?

    I have tried creating ifcfg-eth0:0, ifcfg-eth0:1, ifcfg-eth0:X, etc. for each IP. Below is one of my ifcfg files; I have one of these for each IP, with the correct gateway IP and netmask for each of my 60 IPs. If I do ip addr show, it does show all 60 addresses with the correct broadcast IP and netmask. However, I can only use the 30 IPs that are from the same netmask. Am I doing this correctly? If the IPs show up in ip addr show, does that mean I have correctly assigned them to the machine virtually? I want to check before I blame my hosting company for not routing the IPs correctly.

        DEVICE="eth0:1"
        BOOTPROTO="static"
        DNS1="**.**.**.**"
        DNS2="**.**.**.**"
        GATEWAY="2**.**.***.126"
        HOSTNAME="localhost.localdomain"
        HWADDR="0*:19:**:**:**:**"
        IPADDR="2**.*.**.**"
        IPV6INIT="no"
        MTU="1500"
        NETMASK="255.255.255.128"
        NM_CONTROLLED="yes"
        ONBOOT="yes"
        TYPE="Ethernet"

    Also, is there a better way to do this? I have used ifcfg-eth0:0-range1 before to assign a range of IPs from the same netmask. Is it possible to do this with ranges that have different netmasks? Thanks!
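
    For the range question, one approach often used on CentOS is a separate range file per netmask, with CLONENUM_START offset so the alias numbers don't collide. A rough sketch with placeholder addresses (untested, and the per-subnet gateways may still need their own routing rules):

        # /etc/sysconfig/network-scripts/ifcfg-eth0-range0
        IPADDR_START=203.0.113.1
        IPADDR_END=203.0.113.30
        CLONENUM_START=0
        NETMASK=255.255.255.0

        # /etc/sysconfig/network-scripts/ifcfg-eth0-range1
        IPADDR_START=198.51.100.1
        IPADDR_END=198.51.100.30
        CLONENUM_START=30
        NETMASK=255.255.255.128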

    Read the article

  • Enabling publickey authentication for server's sshd

    - by aaron
    I have two servers running RHEL 5. Both have nearly identical configurations. I have set up RSA public key authentication on both, and one works but the other does not:

        [my_user@client] $ ssh my_user@server1
        --- server1 MOTD Banner ---
        [my_user@server1] $

    and on the other server:

        [my_user@client] $ ssh my_user@server2
        my_user@server2's password:
        --- server2 MOTD Banner ---
        [my_user@server2] $

    server2's /etc/ssh/sshd_config file snippet:

        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys

    When I run ssh -vvv I get the following snippet:

        debug3: authmethod_lookup publickey
        debug3: remaining preferred: keyboard-interactive,password
        debug3: authmethod_is_enabled publickey
        debug3: Next authentication method: publickey
        debug1: Offering public key: /home/my_user/.ssh/id_rsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Offering public key: /home/my_user/.ssh/id_dsa
        debug3: send_pubkey_test
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug3: authmethod_lookup password
        debug3: remaining preferred: ,password
        debug3: authmethod_is_enabled password
        debug1: Next authentication method: password
        my_user@server2's password:
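
    Since the key is being offered and rejected only on server2, the usual suspects on RHEL are home-directory and key-file permissions, and the SELinux label on ~/.ssh. A few hedged checks to run on server2 as my_user:

        # sshd refuses keys if the home dir or key files are group/world writable
        chmod 755 /home/my_user
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # restore the SELinux context in case the files were copied in with the wrong label
        restorecon -R -v ~/.ssh
        # watching /var/log/secure on server2 during a login attempt usually names the exact complaint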

    Read the article

  • glib2 64-bit compile fails on Solaris 10

    - by Aaron
    I'm encountering a problem building glib-2.26.1 on a Solaris 10 box, 64-bit. Due diligence doesn't turn anything up, but no matter what I do the build fails in the same way. I've tried using the Sun Studio compiler and gcc (SFW) to no avail. When I compile I get the following error:

        [root@foo glib-2.26.1]$ export CC=/opt/solstudio12.2/bin/cc
        [root@foo glib-2.26.1]$ export CFLAGS="-m64"
        ...configure goes normally...
        [root@foo glib-2.26.1]$ make
        ...snip...
        source='gatomic.c' object='gatomic.lo' libtool=yes \
        DEPDIR=.deps depmode=none /bin/bash ../depcomp \
        /bin/bash ../libtool --tag=CC --mode=compile /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c -o gatomic.lo gatomic.c
        libtool: compile: /opt/solstudio12.2/bin/cc -DHAVE_CONFIG_H -I. -I.. -I.. -I../glib -I../glib -I.. -DG_LOG_DOMAIN=\"GLib\" -DG_DISABLE_CAST_CHECKS -DG_DISABLE_DEPRECATED -DGLIB_COMPILATION -DPCRE_STATIC -DG_DISABLE_SINGLE_INCLUDES -D_REENTRANT -D_PTHREADS -m64 -c gatomic.c -KPIC -DPIC -o .libs/gatomic.o
        "gatomic.c", line 885: warning: no explicit type given
        "gatomic.c", line 885: syntax error before or at: *
        "gatomic.c", line 885: warning: old-style declaration or incorrect type for: g_atomic_mutex
        "gatomic.c", line 906: warning: implicit function declaration: g_mutex_lock
        "gatomic.c", line 909: warning: implicit function declaration: g_mutex_unlock
        "gatomic.c", line 1155: warning: implicit function declaration: g_mutex_new
        "gatomic.c", line 1155: warning: improper pointer/integer combination: op "="
        cc: acomp failed for gatomic.c
        make[4]: *** [gatomic.lo] Error 1
        make[4]: Leaving directory `/root/glib-2.26.1/glib'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/root/glib-2.26.1/glib'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/root/glib-2.26.1/glib'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/root/glib-2.26.1'
        make: *** [all] Error 2

    Does anyone know where the build might be going wrong? Not sure where else to look here. Thanks.

    Read the article

  • Google Chrome Extensions: Launch Event (part 2)

    Google Chrome Extensions: Launch Event (part 2) Video Footage from the Google Chrome Extensions launch event on 12/09/09. Aaron Boodman and Erik Kay, technical leads for the Google Chrome extensions team, present a quick history of the extensions system of Google Chrome and discuss its design principles, focusing on why extensions are webby. From: GoogleDevelopers Views: 3035 12 ratings Time: 05:25 More in Science & Technology

    Read the article

  • Unhappy with performance of GBit Ethernet to Fiber converter

    - by Aaron Digulla
    I just bought a TP-Link MC200CM GBit Ethernet (1000BASE-T) to fiber (1000BASE-SX) media converter. The device works, but I'm unhappy with the performance: when connecting my computer to my server over 1000BASE-T (twisted pair, Cat 6, 18 meters), I get a throughput of about 610 Mbit/s. If I replace the cable with the two media converters, I'm left with about 310-315 Mbit/s (i.e. half the performance). My setup is like this:

        Computer <- GBit switch <- long cable <- GBit switch <- server
        Computer <- GBit switch <- MC200CM <- 30m fiber <- MC200CM <- GBit switch <- server

    Is there a way to improve the performance? Will another media converter be faster? Or is that about as much as I can expect with the additional two converters?
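
    If not done already, it may be worth isolating where the loss comes from with a raw TCP benchmark rather than a file transfer; a minimal sketch assuming iperf is installed on both machines (the hostname is a placeholder):

        # on the server
        iperf -s
        # on the client: a 30-second test, reporting every 5 seconds
        iperf -c server.example.com -t 30 -i 5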

    Read the article

  • Can't avoid starting macbook in safe mode

    - by Aaron Brown
    I recently spilled some water on my MacBook (mid-2010) keyboard and it shorted out several of the keys. Notably, Control and left Option don't work, and the system thinks that the left Shift is permanently held down. I plugged in an external USB keyboard and all keys work fine; there's only one problem: the computer always starts in safe mode because the Shift key is held down. I've tried holding down other keys (Escape, Space, and C, to name a few), and the Control key doesn't work so I can't try that. I also tried KeyRemap4MacBook, but it doesn't work in safe mode and it doesn't seem to help on startup for me. I can log in to Windows with no problems (with rEFIt) and I can browse the internet with no problems, but I can't program on the Mac OS side in safe mode (it's really slow), which is mainly what I use this MacBook for. Any ideas out there on how to avoid starting in safe mode?

    Read the article

  • jump to page of a pdf in google docs / drive / apps

    - by Aaron - Solution Evangelist
    I want to jump to a specific page of a PDF file in Google Docs, via the editor URL https://docs.google.com/file/d/xxx/edit or the embed URL https://docs.google.com/file/d/xxx/preview. I am not looking to use the http://docs.google.com/gview?url= viewer referenced in the Stack Overflow question "how to open specific page on Google's docs viewer", as I want to do this for documents where authentication is required and the document is not available via a public URL. Is there some way of appending an anchor (I would have expected it to be https://docs.google.com/file/d/xxx/preview#10) or a query (e.g. https://docs.google.com/file/d/xxx/preview?page=10) to the Google Docs / Drive / Apps viewer?

    Read the article
