Search Results

Search found 39 results on 2 pages for 'gregg leventhal'.

Page 1/2 | 1 2  | Next Page >

  • Brendan Gregg's "Systems Performance: Enterprise and the Cloud"

    - by user12608550
    Long ago, the prerequisite UNIX performance book was Adrian Cockcroft's 1994 classic, Sun Performance and Tuning: Sparc & Solaris, later updated in 1998 as Java and the Internet. As Solaris evolved to include the invaluable DTrace observability features, new essential performance references have been published, such as Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris (2006) by McDougal, Mauro, and Gregg, and DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X and FreeBSD (2011), also by Mauro and Gregg. Much has occurred in Solaris Land since those books appeared, notably Oracle's acquisition of Sun Microsystems in 2010 and the demise of the OpenSolaris community. But operating system technologies have continued to improve markedly in recent years, driven by stunning advances in multicore processor architecture, virtualization, and the massive scalability requirements of cloud computing. A new performance reference was needed, and I eagerly waited for something that thoroughly covered modern, distributed computing performance issues from the ground up.

    Well, there's a new classic now, authored yet again by Brendan Gregg, former Solaris kernel engineer at Sun and now Lead Performance Engineer at Joyent. Systems Performance: Enterprise and the Cloud is a modern, very comprehensive guide to general system performance principles and practices, as well as a highly detailed reference for specific UNIX and Linux observability tools used to examine and diagnose operating system behaviour. It provides thorough definitions of terms, explains performance diagnostic Best Practices and "Worst Practices" (called "anti-methods"), and covers key observability tools including DTrace, SystemTap, and all the traditional UNIX utilities like vmstat, ps, iostat, and many others. The book focuses on operating system performance principles and expands on these with respect to Linux (Ubuntu, Fedora, and CentOS are cited), and to Solaris and its derivatives [1]; it is not directed at any one OS, so it is extremely useful as a broad performance reference.

    The author goes beyond the intricacies of performance analysis and shows how to interpret and visualize statistical information gathered from the observability tools. It's often difficult to extract understanding from voluminous rows of text output, and techniques are provided to assist with summarizing, visualizing, and interpreting the performance data. Gregg includes myriad useful references from the system performance literature, including a "Who's Who" of contributors to this great body of diagnostic tools and methods.

    This outstanding book should be required reading for UNIX and Linux system administrators as well as anyone charged with diagnosing OS performance issues. Moreover, the book can easily serve as a textbook for a graduate-level course in operating systems [2].

    [1] Solaris 11, of course, and Joyent's SmartOS (developed from OpenSolaris)

    [2] Gregg has taught system performance seminars for many years; I have also taught such courses...this book would be perfect for the OS component of an advanced CS curriculum.

    Read the article

  • New Book: "Systems Performance: Enterprise and the Cloud"

    - by uwes
    Brendan Gregg, former Solaris kernel engineer at Sun, published his new book "Systems Performance: Enterprise and the Cloud" in October. The book is a modern, very comprehensive guide to general system performance principles and practices, as well as a highly detailed reference for specific UNIX and Linux observability tools used to examine and diagnose operating system behaviour. Read a more detailed abstract and review in Harry J Foxwell's blog entry, "Brendan Gregg's 'Systems Performance: Enterprise and the Cloud'".

    Read the article

  • How do you set the twitter user location in JTwitter?

    - by Gregg Reno
    I'm building some basic Twitter functionality into my app, and am using the JTwitter JAR to read and set status. It looks like there is a User class that can be used to set the location, but I just can't figure out how to set it with my GPS coordinates once I've got my Twitter object. Has anyone been able to set the user.location property using JTwitter? Thanks, -Gregg

    Read the article

  • Upgrading a PERC h310 to a PERC H710 mini RAID controller on a Dell R620

    - by Gregg Leventhal
    I have an ESXi 5.0 free-license host using an internal datastore (RAID 5, 5 disks) that was configured with a Dell PERC H310 RAID controller. The disk performance was very poor, so I upgraded to the PERC H710 Mini. The IT tech installed the controller and powered the host back on. I had to rescan the controller, and the datastore appeared. Should any settings be changed in the RAID BIOS, or should the default settings be sufficient? Is there anything to be aware of when performing this type of upgrade in order to achieve the maximum performance?

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browsing a local folder, and CPU sits at 100% frequently. The guest is XP 32-bit. The host is Scientific Linux 6.2 with libvirt 0.10. The guest XP OS shows the ACPI Multiprocessor HAL, and virtIO drivers for NIC and SCSI are installed. CPU info on the host:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        stepping        : 7
        cpu MHz         : 3200.000
        cache size      : 8192 KB
        physical id     : 0
        siblings        : 8
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 6784.93
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Libvirt domain XML (excerpt):

        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <vcpu placement='static' cpuset='0'>1</vcpu>
        <os>
          <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>SandyBridge</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='vme'/>
          <feature policy='require' name='tm2'/>
          <feature policy='require' name='est'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='osxsave'/>
          <feature policy='require' name='smx'/>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='ds'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='dtes64'/>
          <feature policy='require' name='ht'/>
          <feature policy='require' name='pbe'/>
          <feature policy='require' name='tm'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='ds_cpl'/>
          <feature policy='require' name='xtpr'/>
          <feature policy='require' name='acpi'/>
          <feature policy='require' name='monitor'/>
          <feature policy='force' name='sse'/>
          <feature policy='force' name='sse2'/>
          <feature policy='force' name='sse4.1'/>
          <feature policy='force' name='sse4.2'/>
          <feature policy='force' name='ssse3'/>
          <feature policy='force' name='x2apic'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </disk>

    Read the article

  • puppet service not stopping service

    - by Gregg Leventhal
    notice ("This should be echoed") service { "iptables": ensure => "stopped", } This does not stop iptables, I am not sure why. service iptables stop works fine. Puppet 2.6.17 on CentOS 6.3. UPDATE: /etc/puppet/manifests/nodes.pp node 'linux-dev' { include mycompany::install::apache::init include mycompany::config::services::init } /etc/puppet/modules/mycompany/manifests/config/services/init.pp class mycompany::config::services::init { if ($::id == "root") { service { 'iptables': #name => '/sbin/iptables', #enable => false, #hasstatus => true, ensure => stopped } notice ("IPTABLES is now being stopped...") file { '/tmp/puppet_still_works': ensure => 'present', owner => root } else { err("Error: this manifest must be run as the root user!") } }

    Read the article

  • How to approach taking a very diverse hybrid network and making something lean and cohesive

    - by Gregg Leventhal
    I am going to have an opportunity (from the role of Linux sysadmin) to work on optimizing a corporate server network that has a lot of different application servers, from LAMP stacks to JBoss to IIS-based ASP/.NET systems of all sorts. I am interested to hear how you would approach evaluating and consolidating a network in a situation like this, where you are walking in cold. What are some of your go-to techniques?

    Read the article

  • Slow Write Speed on ESXi host

    - by Gregg Leventhal
    I have an ESXi 5.0 free host with an internal datastore (a five-disk RAID 5 of 7.2K drives) using a PERC H710 Mini RAID controller in a Dell PowerEdge R620 server with 32 GB RAM and a 12-core Xeon. I seem to get slow write speeds in the guests, so I checked ESXTOP and I see a 15 MB/s write speed there on this host, which is comparable to the guests. What could be causing such horrible write speeds? Is RAID 5 really this slow to write?

    Read the article

  • How do you manage updates without a staging environment: CentOS 6.3

    - by Gregg Leventhal
    I am managing about 20 servers, many of them virtual. They almost all serve different purposes, and none are clustered. I have a distributed LAMP stack, a few application servers, some build servers, and a few KVM hosts. They are mostly CentOS 6.3, with a few Ubuntu machines (unfortunately). I don't have the resources to set up a staging environment where I can have duplicates of my machines and test updates before rolling them out. I am taking file backups. What I want to know is how you approach updating your Linux systems. I assume you don't just do yum update, but then how are you choosing the packages worth updating? When (if ever) are you updating the kernel, etc.? How do you test updates without a staging environment? Snapshot and hope for the best?

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup, and one of the PVs is a USB disk (I know). One of them is getting the error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems with all of the LVs on it. pvs shows the PV as an unknown device. I can ls the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the host are secure. What should I do to get this back up and running in the meantime? Should I unmount each LV and run fsck.ext4 on each one, like fsck.ext4 -y /dev/vg1/lv_logvolname ?

    Read the article

  • EPM and Business Analytics Talking-head Videos from Oracle OpenWorld 2013

    - by Mike.Hallett(at)Oracle-BI&EPM
    Here is a selection of 2 to 3 minute video interviews at this year’s Oracle OpenWorld:

    1. George Somogyi, Solutions Architect, New Edge Group, talks about the importance of having their integrated Oracle Hyperion Platform consisting of Oracle Hyperion Financial Management, Oracle Hyperion Financial Data Quality Management, Oracle E-Business Suite R12 and Oracle Business Intelligence Extended Edition plus their use of Oracle Managed Cloud Services. Speaker: George Somogyi @ http://youtu.be/kWn0dQxCUy8

    2. Gregg Thompson, Director of Financial Systems for ADT, talks about using Oracle Data Relationship Management prior to implementing an Enterprise Performance Management solution. Gregg confirmed that there are big benefits to bringing the full Oracle Hyperion Financial Close suite online with Oracle DRM as the metadata source. Reduced maintenance time and use of external consultants translates into significant time and cost savings and faster implementation times. Speaker: Gregg Thompson @ http://youtu.be/XnFrR9Uk4xk

    3. Jeff Spangler, Director Financial Planning and Analysis for Speedy Cash Holdings Corp, talked to us about the benefits achieved through implementing Oracle Hyperion Planning and financial reporting solutions. He also describes how the use of Data Relationship Management will keep the process running smoothly now and in the future. Speaker: Jeff Spangler @ http://youtu.be/kkkuMkgJ22U

    4. Marc Seewald, Senior Director of Product Management for Oracle Hyperion Tax Provision at Oracle, talks about Oracle Hyperion Tax Provision, how it is an integral part of the financial close process and that it provides better internal controls and automation of this task. Marc talks about Oracle Partners and customers alike who are seeing great value. Speaker: Marc Seewald @ http://youtu.be/lM_nfvACGuA

    5. Matt Bradley, SVP of Product Development for Enterprise Performance Management (EPM) Applications at Oracle, talked to us about different deployment options for Oracle EPM. Cloud services (SaaS), managed services, on-premise, off-premise all have their merits, and organizations need flexibility to easily move between them as their companies evolve. Speaker: Matt Bradley @ http://youtu.be/ATO7Z9dbE-o

    6. Neil Sellers, Partner, Qubix International, talks about their experience with previewing Oracle’s new Planning and Budgeting Cloud Service. He describes the benefits of the step-by-step task lists, the speed of getting the application up and running, and the huge benefits of not having to manage the software and hardware side of the planning process. Speaker: Neil Sellers @ http://youtu.be/xmosO28e4_I

    7. Praveen Pasupuleti, Senior Business Intelligence Development Manager of Citrix Systems Inc., talks about their Oracle Hyperion Planning upgrade and the huge performance improvement now experienced in forecasting. He also talked about the benefits of Oracle Hyperion Workforce Planning achieved by Citrix. Speaker: Praveen Pasupuleti @ http://youtu.be/d1e_4hLqw8c

    8. CheckPoint Consulting talked to us about how Enterprise Performance Management should be viewed as an entire solution, rather than as a bunch of applications in silos, to provide significant benefits; and how Data Relationship Management can tie it all together effectively. Speaker: Ron Dimon @ http://youtu.be/sRwbdbbXvUE

    9. Sonal Kulkarni, Enterprise Performance Management Leader, Cummins Inc., talks about their use of Oracle Hyperion Financial Close Management (Account Reconciliation Manager), Oracle Hyperion Financial Management and Oracle Hyperion Financial Data Quality Management and how this is providing efficiency, visibility and compliance benefits. Speaker: Sonal Kulkarni @ http://youtu.be/OEgup5dKyVc

    10. Todd Renard, Manager Financial Planning and Business Analytics for B/E Aerospace Inc., talks about the huge benefits that B/E Aerospace is experiencing from Oracle Financial Close Suite. He was extremely excited about Oracle Hyperion Financial Data Quality Management and how this helps them integrate a new business in as little as three weeks. Speaker: Todd Renard @ http://youtu.be/nIfqK46uVI8

    11. Peter Smolianski, Chief Technology Officer for the District of Columbia Courts, talked to us about how D.C. Courts is using Oracle Scorecard and Strategy Management to push their 5 year plan forward, to report results to their constituents, and take accountability for process changes to become more efficient. Speaker: Peter Smolianski @ http://www.youtube.com/watch?v=T-DtB5pl-uk

    12. Rich Wilkie, Senior Director of Product Management for Financial Close Suite at Oracle, talked to us about Oracle Financial Management Analytics. He told us how the prebuilt dashboards on top of Oracle Hyperion Financial Close Suite make it easy for everyone to see the numbers and understand where they are in the close process, and if there is an issue, they can see where it is. Executives are excited to get this information on mobile devices too. Speaker: Rich Wilkie @ http://www.youtube.com/watch?v=4UHuHgx74Yg

    13. Dinesh Balebail, Senior Director of Software Development for Oracle Hyperion Profitability and Cost Management, talked to us about the power and speed of Oracle Hyperion Profitability and Cost Management and how it is being used to do deep costing for Telecoms, Hospitals, Banks and other high transaction volume organizations effectively. Speaker: Dinesh Balebail @ http://youtu.be/ivx5AZCXAfs

    Read the article

  • PERC H710 mini raid controller advanced settings (BIOS)

    - by gregg
    I upgraded from a PERC H310 to an H710 controller on my Dell R620 but didn't get any increase in performance. This is an ESXi host with a 5-disk RAID 5. I noticed when going into the RAID BIOS that the advanced settings section was not activated/checked off. In that section are the stripe element size (64 KB, the default), the read policy (No Read Ahead), and the write policy (Write-Through). Will checking that section do any harm to the existing RAID array, or will it simply enable those policies and hopefully boost performance? Or, lastly, is it already using those policies, and is the checkmark simply there to activate them for changes?

    Read the article

  • Unexpected "Connection timed out: proxy connect" lines in Apache error.log

    - by Gregg Lind
    I see some unexpected lines in my Apache (1.3!) error.log. What is happening here? My ISP has complained in the past about proxying attempts... how do I check for them?

        [Sun Apr 4 16:43:32 2010] [error] [client 60.173.11.34] (110)Connection timed out: proxy connect to 61.132.221.146 port 80 failed
        [Sun Apr 4 16:44:11 2010] [error] [client 60.173.11.34] (110)Connection timed out: proxy connect to 61.132.221.146 port 80 failed
        [Sun Apr 4 16:45:34 2010] [error] [client 79.2.28.220] (110)Connection timed out: proxy connect to 203.212.171.170 port 80 failed

    (If more information would be useful, please ask me to clarify!)

    Read the article

  • How should I ask for help in getting my emails to stop bouncing?

    - by Gregg Williams
    For several months, people have been telling me that emails they sent to me have been bouncing back, marked as undeliverable. The bounce message would contain portions like this:

        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.7.1
        Diagnostic-Code: smtp;550 5.7.1 <[email protected]>... Recipient declines email from 69.64.159.2, <spamhaus-xbl>, Ref: http://www.spamhaus.org/query/bl?ip=69.64.159.2

    Clicking the link on the last line, the destination page told me that "this IP address is infected with/emitting spamware/spamtrojan traffic and needs to be fixed." I could temporarily de-list this node by clicking a link on that page, but it would get back on the list and more emails to me would bounce. I own a domain, innerpaths.net, and I normally use [email protected] for my email. I have my domain registrar, namecheap.com, forward all email from innerpaths.net to the email account [email protected]. (BTW, I had this same problem at a former registrar. I changed registrars, hoping that would fix the problem. It didn't.) Trying to isolate the problem, I asked namecheap.com what I should do. Their answer, though substantial, left me scratching my head:

    We have received feedback from our upstream provider which informed us that the mail server that you are trying to email subscribes to a 3rd party blacklist service which they appear to be listed on at the present time and is causing the destination mail server to reject the messages. Being blocked with one of these services can happen to anyone for many reasons and is something that is beyond our control. 3rd party blacklist services require that companies whose mail servers they have blacklisted pay fees in order to be removed from their lists. As we cannot pay fees to blacklist services which require them for removal, you should contact your email provider and have them whitelist our mail server IP address: 69.64.157.73.

    My best guess is that I should email my ISP, sonic.net, tell them what is going on, and ask them to whitelist the IP address 69.64.157.73. (If not, please let me know.) But I want to know what is going on and how email works. I understand that there's a device at location 69.64.159.2 that is doing something bad that causes the "destination mail server [sonic.net's, I assume --gw] to reject the messages." I know that email is sent through multiple devices in a way that eventually gets it to its destination. Beyond that, here are my questions:

    1) I thought the Internet "routed around damage." Why does email starting at namecheap.com always (or is it 'sometimes'?) go through 69.64.159.2?

    2) Who is the "upstream provider" that the namecheap.com representative mentions, and what is their role?

    3) How does having sonic.net whitelist namecheap.com's mail server prevent my email from being bounced by 69.64.159.2?

    I've searched the Internet for answers but have found nothing useful. Thanks for whatever answers you can provide.

    Read the article

  • Added user to CentOS, Updated sshd_config with AllowUsers, Login denied

    - by Gregg
    CentOS 5.3. I can SSH into the system as root just fine. Added a user and set their password. They have shell access (/bin/bash). I can su to the account from root just fine. I updated /etc/ssh/sshd_config with:

        AllowUsers myNewUser

    And restarted sshd:

        /etc/init.d/sshd restart

    When trying to ssh into the server with the new user, I get a permission denied. And yes, I've double and triple checked that I am using the correct password. Any help is appreciated.

    Read the article

  • /etc/init.d/libvirtd start fails but service libvirtd start works. Why?

    - by Gregg
    CentOS 6.3, running as root (Shush). Can you please tell me why I would get initialisation failures from the init scripts but the service command works a treat? There was nothing in /var/log/messages or /var/log/libvirt/*; all I have is the terminal output:

        /etc/init.d/libvirtd start
        Starting libvirtd daemon: libvirtd: initialization failed            [FAILED]

    I changed the libvirtd logging level to 1, the highest, but saw nothing in messages after another failure.

    Read the article

  • The Jack LaLanne School of Sysadmins

    - by rickramsey
    Two of my childhood heroes were Tarzan and Jack LaLanne. Tarzan was an obvious choice: what boy wouldn't want to spend his days bungee jumping through the jungle with his own pack of gorillas? Jack LaLanne had a disturbing habit of wearing stretch pants, but he was so damn fit for an old guy that you couldn't help but be impressed. Especially back then, when nobody knew what a dumb bell was, much less Cross-Fit. Here's what he did to celebrate his 70th birthday. Sooner or later we all face a choice in our careers: surrender to the life of a has-been like Bruce Springsteen's baseball player, or become an unstoppable sysadmin like Jack LaLanne. If you'd rather keep on fighting like Jack, give these resources a look. Brian Bream's blog provides specific suggestions for keeping your skills up to date. The video interviews describe the types of technologies that are challenging what you used to know.

    Blog: The Old School Sysadmin - A Dying Breed? by Brian Bream. "The sysadmin role has been far too dependent on performing repetitive tasks and working in a reactionary mode ... the sysadmin must grow a much larger skill set to be successful. Don’t grow vertically in one technology, grow horizontally amongst many technologies." Just one of the suggestions Brian Bream provides in this excellent blog post.

    Video: Freeing the Sysadmin From Repetitive Tasks. Interview with Marshall Choy. Marshall Choy, Director of Optimized Solutions at Oracle, was once a sysadmin. And a Solaris engineer. He explains what optimized solutions are, how they are developed and tested, how they handle patching, and how these vertically integrated systems impact the job and duties of a sysadmin.

    Video: The Oracle Database Appliance. Interview with Bob Thome. Bob Thome, Senior Director of Product Management, explains what makes the Database Appliance simple, reliable, and affordable, and how it could change the economies and processes of the data center.

    Video: Why Pinellas County Chose Oracle Exalytics. Interview with Gautham. Gautham (pronounced like Batman's Gotham) recently led an effort to refresh the Pinellas County hardware systems. He'll explain what they were looking for, why they chose Oracle Exalytics, how they became convinced it was the right decision, and how it changed the way they managed their data center.

    Video: DTrace for System Administrators. Interview with Brendan Gregg. This video interview will give you an idea of some of the value-add tasks you can perform when you are freed from the reactive mode that Brian Bream describes in his blog. Brendan Gregg describes the best ways for sysadmins to tune deployed applications to get more performance out of them in their particular computing environment.

    Photograph of Ford Mustang GT 500 taken at the Gateway Museum, copyright by Rick Ramsey.

    -Rick

    Read the article

  • UPK for Testing Webinar Recording Now Available!

    - by Karen Rihs
    For anyone who missed last week’s event, a recording of the UPK for Testing webinar is now available.  As an implementation and enablement tool, Oracle’s User Productivity Kit (UPK) provides value throughout the software lifecycle.  Application testing is one area where customers like Northern Illinois University (NIU) are finding huge value in UPK and are using it to validate their systems.  Hear Beth Renstrom, UPK Product Manager, and Bettylynne Gregg, NIU ERP Coordinator, discuss how the Test It Mode, Test Scripts, and Test Cases of UPK can be used to facilitate applications testing.

    Read the article

  • HgWebDir push permission denied error

    - by Gregg
    I have a new Fedora 12 server that I am attempting to set up Mercurial on. I have yum installed mercurial, and most things seem to work fine. However, after setting up hgwebdir.cgi through Apache, I am unable to do an hg push to the only repo currently being hosted. The error I get back is:

        searching for changes
        abort: HTTP Error 500: Permission denied: .hg/store/lock

    httpd is running as user apache:

        UID        PID  PPID  C STIME TTY          TIME CMD
        root      1691     1  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1694  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1695  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1696  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1697  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1698  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1699  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1700  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd
        apache    1701  1691  0 13:19 ?        00:00:00 /usr/sbin/httpd

    and I set permissions so that the apache user owns the whole repo and everything. In a last ditch attempt, I even made the repo globally writable.

        [root@builds .hg]# ll
        total 424K
        drwxrwxrwx.  3 apache apache 4.0K 2010-04-19 14:43 .
        drwxrwxrwx. 19 apache apache 4.0K 2010-04-15 13:33 ..
        -rw-rw-rw-.  2 apache apache   57 2010-04-13 11:42 00changelog.i
        -rw-rw-rw-.  1 apache apache   93 2010-04-16 15:33 branchheads.cache
        -rw-rw-rw-.  1 apache apache 192K 2010-04-15 13:33 dirstate
        -rw-r--r--.  1 apache apache  156 2010-04-19 14:43 hgrc
        -rw-rw-rw-.  1 apache apache   42 2010-04-15 13:33 last-message.txt
        -rw-rw-rw-.  2 apache apache   23 2010-04-13 11:42 requires
        drwxrwxrwx.  4 apache apache 4.0K 2010-04-19 11:26 store
        -rw-rw-rw-.  1 apache apache   45 2010-04-14 14:08 tags.cache
        -rw-rw-rw-.  1 apache apache    7 2010-04-16 15:33 undo.branch
        -rw-rw-rw-.  1 apache apache 192K 2010-04-16 15:33 undo.dirstate
        [root@builds .hg]# cd store
        [root@builds store]# ll
        total 308K
        drwxrwxrwx.  4 apache apache 4.0K 2010-04-19 11:26 .
        drwxrwxrwx.  3 apache apache 4.0K 2010-04-19 14:43 ..
        -rw-rw-rw-.  1 apache apache  20K 2010-04-16 15:33 00changelog.i
        -rw-rw-rw-.  1 apache apache  81K 2010-04-16 15:33 00manifest.i
        drwxrwxrwx. 17 apache apache 4.0K 2010-04-13 11:47 data
        drwxrwxrwx.  3 apache apache 4.0K 2010-04-13 11:43 dh
        -rw-rw-rw-.  2 apache apache 177K 2010-04-15 11:03 fncache
        -rw-rw-rw-.  1 apache apache   67 2010-04-16 15:33 undo

    I have a clone of the repo elsewhere on the machine running as a different user. If I set the default value in the [paths] section of the clone's hgrc file to the local filepath on the server, the push works fine, but if I switch it to use the url, I get the error every time. Some possible quirks in how I've set this up... hgwebdir.cgi is sitting in /var/www/cgi-bin and the repo is a child of /opt/hg. I turned off suexec as well, and this doesn't seem to clear up the issue. The only line I added in the Apache config to get hgwebdir running is:

        ScriptAlias /hg "/var/www/cgi-bin/hgwebdir.cgi"

    The hgweb.config is also in /var/www/cgi-bin and its contents are:

        [collections]
        /opt/hg = /opt/hg

        [trusted]
        users = *

        [web]
        baseurl = /hg
        push_ssl = false
        allow_push = *

    The repo browser is working fine; it's just push that doesn't work. The Apache error_log doesn't have anything about this error at all.

    Read the article

  • Latent Dirichlet Allocation, pitfalls, tips and programs

    - by Gregg Lind
    I'm experimenting with Latent Dirichlet Allocation for topic disambiguation and assignment, and I'm looking for advice.

    1. Which program is the "best", where best is some combination of easiest to use, best prior estimation, and fast?
    2. How do I incorporate my intuitions about topicality? Let's say I think I know that some items in the corpus are really in the same category, like all articles by the same author. Can I add that into the analysis?
    3. Any unexpected pitfalls or tips I should know before embarking?

    I'd prefer it if there are R or Python front ends for whatever program, but I expect (and accept) that I'll be dealing with C.
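
    Not part of the original question, but for illustration: a minimal sketch of the kind of Python front end the asker is hoping for, using the gensim library. The toy corpus, topic count, and prior settings below are placeholder assumptions, not recommendations from the source.

        # Hypothetical example: fit LDA on a toy corpus with gensim.
        # Corpus contents, num_topics, and the Dirichlet priors are
        # illustrative assumptions only.
        from gensim import corpora, models

        texts = [
            ["human", "machine", "interface", "computer"],
            ["graph", "trees", "minors", "survey"],
            ["user", "interface", "response", "time", "computer"],
        ]

        dictionary = corpora.Dictionary(texts)
        corpus = [dictionary.doc2bow(text) for text in texts]

        # alpha/eta are the Dirichlet priors; adjusting them is one crude
        # way to encode prior beliefs about the topic structure.
        lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2,
                              passes=20, alpha="auto")

        for topic in lda.print_topics(num_topics=2, num_words=4):
            print(topic)

        # Topic mixture for a new document:
        print(lda[dictionary.doc2bow(["computer", "interface"])])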

    Read the article

  • jets3t and Downloading Files from AmazonS3 with Different Name

    - by Gregg
    We're using AmazonS3 for file storage and recently found out that we need to keep some sort of directory structure. Since S3 doesn't allow that, we know we can name the files according to their structure for storage. For example...

        abc/123/draft.doc

    What I want to know is, if I want to provide a public link to this particular file, is there any way that the file can simply be draft.doc instead of abc/123/draft.doc ?
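
    Not from the question itself, but one way to think about it: S3 will serve back a Content-Disposition header stored with the object, so the filename a browser saves need not match the key. The sketch below uses boto (Python) purely as an illustration of that idea rather than jets3t; the same header can presumably be set through jets3t's object metadata APIs. Bucket name and credentials are placeholders.

        # Hypothetical sketch: store a Content-Disposition header on the
        # object so downloads are saved as "draft.doc" despite the key
        # being abc/123/draft.doc. Credentials and bucket are placeholders.
        import boto

        conn = boto.connect_s3("ACCESS_KEY", "SECRET_KEY")
        bucket = conn.get_bucket("example-bucket")

        key = bucket.new_key("abc/123/draft.doc")
        key.set_contents_from_filename(
            "draft.doc",
            headers={"Content-Disposition": 'attachment; filename="draft.doc"'},
        )
        key.set_acl("public-read")

        # The public URL still contains abc/123/draft.doc, but the browser
        # should save the download simply as draft.doc.
        print(key.generate_url(expires_in=0, query_auth=False))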

    Read the article

  • Recover history from foolish git-svn merge

    - by Gregg Lind
    The players:

        master: the svn branch (actual, not local tracking)
        mybranch: a local branch

    My mistake:

        [master] git svn rebase
        [master] git merge mybranch
        [master] git svn dcommit

    I did this twice. Is there a way I can remedy all this? I was thinking something like:

        git checkout --hard [commit before the merging]
        git dcommit # that to the svn?
        git rebase mybranch
        git dcommit

    But this doesn't seem to work. (I know I should a. be working from a local tracking branch and b. have rebased rather than merged.) I'm in the frantic / willing to send beer to respondents stage :)

    Read the article

  • Xml comparison in Python

    - by Gregg Lind
    Building on another SO question, how can one check whether two well-formed XML snippets are semantically equal? All I need is "equal" or not, since I'm using this for unit tests. In the system I want, these would be equal (note the order of 'start' and 'end'):

        <?xml version='1.0' encoding='utf-8' standalone='yes'?>
        <Stats start="1275955200" end="1276041599">
        </Stats>

        # Reordered start and end
        <?xml version='1.0' encoding='utf-8' standalone='yes'?>
        <Stats end="1276041599" start="1275955200" >
        </Stats>

    I have lxml and other tools at my disposal, and a simple function that only allows reordering of attributes would work fine as well!
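
    Not part of the original question; a minimal sketch of the kind of comparison the asker describes, written against the standard-library ElementTree (the same recursive approach would work with lxml). The helper name and the whitespace handling are illustrative choices.

        # Compare two XML strings semantically: attribute order is ignored
        # (attrib is a dict), and leading/trailing whitespace in text/tail
        # is stripped. Child order still matters in this sketch.
        import xml.etree.ElementTree as ET

        def xml_semantically_equal(a, b):
            def norm(text):
                return (text or "").strip()

            def elements_equal(e1, e2):
                if e1.tag != e2.tag or e1.attrib != e2.attrib:
                    return False
                if norm(e1.text) != norm(e2.text) or norm(e1.tail) != norm(e2.tail):
                    return False
                if len(e1) != len(e2):
                    return False
                return all(elements_equal(c1, c2) for c1, c2 in zip(e1, e2))

            return elements_equal(ET.fromstring(a), ET.fromstring(b))

        a = '<Stats start="1275955200" end="1276041599"> </Stats>'
        b = '<Stats end="1276041599" start="1275955200" > </Stats>'
        assert xml_semantically_equal(a, b)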

    Read the article

  • Modern, Non-trivial, Pygame Tutorials?

    - by Gregg Lind
    What are some 'good', non-trivial Pygame tutorials? I realize good is relative. As an example, a good one (to me) is the one that describes how to use pygame.camera. It's recent (it uses a modern Pygame, 1.9) and non-trivial, in that it shows how to use the module for a real application. I'd like to find others. A lot of the ones on the Pygame site are from the 1.3 era or earlier! Info on related projects, like Gloss, is welcome as well. (If your answer is "read the source of some pygame games", please link to the source of particular ones and note what is good about them.)
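
    As a point of reference (not from the question): the pygame.camera material the asker mentions revolves around code along these lines; a rough sketch, assuming Pygame 1.9+ and a camera available on the default device. Device name and resolution are illustrative.

        # Rough sketch of pygame.camera usage (Pygame 1.9+ assumed).
        import pygame
        import pygame.camera

        pygame.init()
        pygame.camera.init()

        cameras = pygame.camera.list_cameras()      # e.g. ['/dev/video0']
        cam = pygame.camera.Camera(cameras[0], (640, 480))
        cam.start()

        image = cam.get_image()                     # grab one frame as a Surface
        pygame.image.save(image, "snapshot.jpg")

        cam.stop()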

    Read the article

  • Include at text file as is in (Python) Sphinx Docs

    - by Gregg Lind
    (using the Python-Sphinx documentation tool) I have a .txt log file I'd like to build into _build/html unaltered. What do I need to alter in conf.py, index.rst, etc.?

    Here is the layout:

        src/
            index.rst
            some_doc.rst
            somefile.txt

    How do I get somefile.txt into the html build? I tried adding a line like this to index.rst:

        Contents:

        .. toctree::
           :maxdepth: 2

           some_doc
           "somefile.txt"

    hoping it would work by magic, but no magic here! Assuming this is even possible, what would I put in some_doc.rst to refer/link to that file?

    Note: Yes, I'm aware I could put it in /static and just be done with it, but that seems like a total hack, and ugly.
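
    Not part of the original question, but one possible answer sketch: Sphinx's html_extra_path setting copies files verbatim into the root of the HTML output, which seems to be what the asker wants (assuming a Sphinx version new enough to have it); a .. literalinclude:: somefile.txt directive in a .rst page is the usual way to render the file's contents inside a document instead.

        # conf.py -- minimal sketch (html_extra_path is assumed to be
        # available; it was added in Sphinx 1.2). Paths are relative to
        # the directory containing conf.py, and every file listed here is
        # copied unaltered into _build/html/.
        html_extra_path = ['somefile.txt']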

    Read the article

1 2  | Next Page >