Search Results

Search found 46406 results on 1857 pages for 'super tester test123'.


  • TFS 2010–Bridging the gap between developers and testers

    - by guybarrette
    Last fall, the Montreal .NET Community presented a full day on ALM with a session called “Bridging the gap between developers and testers”. It was a huge success. TFS experts Etienne Tremblay and Vincent Grondin presented this session again at the Ottawa user group in January, and this time the event was recorded by DevTeach in collaboration with Microsoft. This 7-hour training is broken into 13 videos that you can watch online for free on the DevTeach website. If you’re interested in TFS, how to migrate from VSS, the TFS testing tools, how to set up the TFS testing lab, how to test a UI and how to automate the tests, this is a must-see series. Here’s the segment list:
    Intro
    Migrating from VSS to TFS
    Automating the build
    Where’s our backlog?
    Adding a tester to the team
    Tester at work
    Bridging the gap
    Stop, we have a problem!
    Let’s get back on track
    Multi-environment testing
    Testing in the lab
    UI Automation
    Validating UI automation
    Look boss, no hands!
    http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx

    Read the article

  • Detecting Process Shutdown/Startup Events through ActivationAgents

    - by Ramkumar Menon
    @10g - This post is motivated by one of my close friends and colleagues, who wanted to proactively know when a BPEL process shuts down or re-activates. This typically happens when you have a BPEL process with an inbound polling adapter and the adapter loses connectivity to the source system (or whatever else causes it). One valuable suggestion came in from one of my colleagues: he suggested I write my own ActivationAgent to do the job. Well, it really worked. Here is a sample ActivationAgent that you can use. There are a few methods you need to override from BaseActivationAgent, and you are on your way to receiving notifications (or whatever else you need) whenever the shutdown/startup events occur. In the example below, I am retrieving the emailAddress property [that is specified in your bpel.xml activationAgent section] and using it to send out an email notification when the activation agent initializes. You could choose to do different things, but the bottom line is that you can use the API below to access the very same properties that you specify in the bpel.xml.

        package com.adapter.custom.activation;

        import com.collaxa.cube.activation.BaseActivationAgent;
        import com.collaxa.cube.engine.ICubeContext;
        import com.oracle.bpel.client.BPELProcessId;
        import java.util.Date;
        import java.util.Properties;

        public class LifecycleManagerActivationAgent extends BaseActivationAgent {

            public BPELProcessId getBPELProcessId() {
                return super.getBPELProcessId();
            }

            private void handleInit() throws Exception {
                // Write initialization code here
                System.err.println("Entered initialization code....");
                // e.g. String emailAddress = getActivationAgentDescriptor().getPropertyValue("emailAddress");
                // then send an email notification: sendEmail(emailAddress);
            }

            private void handleLoad() throws Exception {
                // Write load code here
                System.err.println("Entered load code....");
            }

            private void handleUnload() throws Exception {
                // Write unload code here
                System.err.println("Entered unload code....");
            }

            private void handleUninit() throws Exception {
                // Write uninitialization code here
                System.err.println("Entered uninitialization code....");
            }

            public void init(ICubeContext icubecontext) throws Exception {
                super.init(icubecontext);
                System.err.println("Initializing LifecycleManager Activation Agent .....");
                handleInit();
            }

            public void unload(ICubeContext icubecontext) throws Exception {
                super.unload(icubecontext);
                System.err.println("Unloading LifecycleManager Activation Agent .....");
                handleUnload();
            }

            public void uninit(ICubeContext icubecontext) throws Exception {
                super.uninit(icubecontext);
                System.err.println("Uninitializing LifecycleManager Activation Agent .....");
                handleUninit();
            }

            public String getName() {
                return "Lifecyclemanageractivationagent";
            }

            public void onStateChanged(int i, ICubeContext iCubeContext) { }
            public void onLifeCycleChanged(int i, ICubeContext iCubeContext) { }
            public void onUndeployed(ICubeContext iCubeContext) { }
            public void onServerShutdown() { }
        }

    Once you compile this code, generate a jar file and ensure you add it to the server startup classpath. The library is ready for use after the server restarts. To use this activation agent, add an additional activationAgent entry in the bpel.xml of the BPEL process that you wish to monitor. After you deploy the process, the ActivationAgent object will be called back whenever the events corresponding to the overridden methods are raised [init(), load(), unload(), uninit()], and your custom code is then executed. Here is a sample bpel.xml illustrating the activationAgent definition and property definition:
    <?xml version="1.0" encoding="UTF-8"?>
    <BPELSuitcase timestamp="1291943469921" revision="1.0">
      <BPELProcess wsdlPort="{http://xmlns.oracle.com/BPELTest}BPELTestPort" src="BPELTest.bpel" wsdlService="{http://xmlns.oracle.com/BPELTest}BPELTest" id="BPELTest">
        <partnerLinkBindings>
          <partnerLinkBinding name="client">
            <property name="wsdlLocation">BPELTest.wsdl</property>
          </partnerLinkBinding>
          <partnerLinkBinding name="test">
            <property name="wsdlLocation">test.wsdl</property>
          </partnerLinkBinding>
        </partnerLinkBindings>
        <activationAgents>
          <activationAgent className="oracle.tip.adapter.fw.agent.jca.JCAActivationAgent" partnerLink="test">
            <property name="portType">Read_ptt</property>
          </activationAgent>
          <activationAgent className="com.oracle.bpel.activation.LifecycleManagerActivationAgent" partnerLink="test">
            <property name="emailAddress">[email protected]</property>
          </activationAgent>
        </activationAgents>
      </BPELProcess>
    </BPELSuitcase>
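    For reference, packaging and deploying the agent might look something like the sketch below. The jar name, the BPEL library path (orabpel.jar) and the classpath location are assumptions for a typical 10g install, not details from the original post:

        # Sketch only: compile against the BPEL runtime classes and package the agent.
        # The orabpel.jar path and ORACLE_HOME layout are assumptions; adjust to your install.
        javac -cp $ORACLE_HOME/bpel/lib/orabpel.jar \
              com/adapter/custom/activation/LifecycleManagerActivationAgent.java
        jar cf lifecycle-agent.jar com/adapter/custom/activation/*.class
        # Place the jar on the server startup classpath (exact location depends on your install) and restart.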

    Read the article

  • Bridging the gap between developers and testers with VS 2010

    - by Etienne Tremblay
    Hey everyone, I know it’s been an eternity since I blogged, but I have so much to do that I unfortunately need to prioritize. Vincent Grondin and I did a 7-hour presentation on the new developer and tester tools available in the VS 2010 suite. It was a blast. We did it in front of an audience (around 120) and it was taped. We did it as a play and really didn’t look at the crowd at all; we were training each other on the technology. It is now available for anyone who would like to watch it at this location: http://www.devteach.com/ALM-TFS2010-Bridgingthegap.aspx What we covered in the full day event was:
    Migration to TFS 2010 (10h00)
    1- Migration of VSS to TFS (20 min.)
    2- Automating the build (something you can't do with VSS) (20 min.)
    3- User story (real application context for this presentation) (20 min.)
    10h00 Pause
    Manual tests by dev (11h30)
    4- Adding a tester to the team (intro to MTM) (20 min.)
    5- Define tests (what is a white bug) (20 min.)
    6- Fix the bug, show IntelliTrace and play back the test (20 min.)
    12h15 Lunch
    Manual testing for maintenance (13h30)
    7- Implement a new feature (web service), identify the bug with MTM, branch for a production fix and also add a new build script (20 min.)
    8- Fix the bug in the production branch, play back the tests, merge the change into the main branch (20 min.)
    Manual testing with Lab Manager (14h30)
    9- Intro to Lab Manager and environments (20 min.)
    10- Change the build script to deploy to the lab and test with the web service in the lab environment (20 min.)
    15h15 Pause
    Automate UI tests with Coded UI (15h30)
    11- Reducing the effort of testing the UI (20 min.)
    12- Repeating testing to make sure the application is working properly (20 min.)
    13- Automate Coded UI with the lab environment (20 min.)
    16h30 Conclusions
    As you can see, lots of stuff! Enjoy the show and let us know how you like it. Cheers, ET Technorati Tags: VS 2010, Testing Tools, ALM, Training

    Read the article

  • unit level testing, agile, and refactoring

    - by dsollen
    I'm working on a very agile development system: a small number of people, with me doing the vast majority of the programming myself. I've gotten to the testing phase and find myself writing mostly functional-level tests, which I should in theory be leaving for our tester (in practice I don't entirely... trust our tester to detect and identify defects enough to leave him the sole writer of functional tests). In theory what I should be writing is unit-level tests. However, I'm not sure it's worth the expense. Unit testing takes some time to do, more than functional testing, since I have to set up mocks and plug into smaller units that weren't designed to run in isolation. More importantly, I find I refactor and redesign heavily. Part of this is due to my inheriting code that needed heavy redesign and is still being cleaned up, but even once I've finished removing the parts that need work, I'm sure that in the act of expanding the code I'll still do a decent amount of refactoring and redesign. It feels as if I will break my unit tests, forcing wasted time refactoring them as well, often because unit tests, by definition, have to be coupled so closely to the code structure. So: is it worth all the wasted time when functional tests, which will never break when I refactor/redesign, should find most defects? Do unit tests really provide that much extra defect detection over thorough functional tests? And how does one create good unit tests that work with very quick and agile code that is modified rapidly? P.S. I would be fine/happy with links to anything one considers an excellent resource for how to 'do' unit testing in a highly changing environment. Edit: to clarify, I am doing a bit of very unofficial TDD; I just seem to be writing tests at what would be considered a functional level rather than a unit level. I think part of this is because I own nearly all of the project, so I don't feel I need to limit the scope as much; and part of it is that it's daunting to think of trying to go back and retroactively add the unit tests needed to cover enough code that I can feel comfortable testing only a unit without the full functionality, and trust that the unit still works with the rest of the units.

    Read the article

  • PHP: Symlink in public_html cannot be accessed through browser

    - by Rachel
    I have a tester.php file which I want to run in the browser, and I have created a symlink to it in my public_html folder, but when I try to run it, it doesn't work and gives me the following error message: Access forbidden! You don't have permission to access the requested object. It is either read-protected or not readable by the server. If you think this is a server error, please contact the webmaster. Error 403 web.upc03.dev.com Sun Apr 4 22:41:23 2010 Apache. I am not sure why I am getting this error message; I have checked all the file permission settings and they seem to be fine. My file permission settings are lrwxrwxrwx for tester.php. Is there something that should be done another way, or is this not the proper approach?
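    (A hedged aside, not from the original question: a common cause of this particular 403 is that Apache either isn't allowed to follow symlinks or can't traverse the directories leading to the link target. A minimal sketch of what one might check, with illustrative paths:)

        # Every directory from / down to the real tester.php must be traversable by the web server
        chmod o+x /home/rachel /home/rachel/dev          # hypothetical location of the real file
        chmod o+r /home/rachel/dev/tester.php
        # Symlink following must be enabled for public_html, e.g. via .htaccess if the host allows it
        echo "Options +FollowSymLinks" >> ~/public_html/.htaccess
        # Note: if the server enforces SymLinksIfOwnerMatch, the link and its target must share the same owner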

    Read the article

  • Welcome to the Red Gate BI Tools Team blog!

    - by BI Tools Team
    Welcome to the first ever post on the brand new Red Gate Business Intelligence Tools Team blog! About the team Nick Sutherland (product manager): After many years as a software developer and project manager, Nick took an MBA and turned to product marketing. SSAS Compare is his second lean startup product (the first being SQL Connect). Follow him on Twitter. David Pond (developer): Before he joined Red Gate in 2011, David made monitoring systems for Goodyear. Follow him on Twitter. Jonathan Watts (tester): Jonathan became a tester after finishing his media degree and joining Xerox. He joined Red Gate in 2004. Follow him on Twitter. James Duffy (technical author): After a spell as a writer in the video game industry, James lived briefly in Tokyo before returning to the UK to start at Red Gate. What we're working on We launched a beta of our first tool, SSAS Compare, last month. It works like SQL Compare but for SSAS cubes, letting you deploy just the changes you want. It's completely free (for now), so check it out. We're still working on it, and we're eager to hear what you think. We hope SSAS Compare will be the first of several tools Red Gate develops for BI professionals, so keep an eye out for more from us in the future. Why we need you This is your chance to help influence the course of SSAS Compare and our future BI tools. If you're a business intelligence specialist, we want to hear about the problems you face so we can build tools that solve them. What do you want to see? Tell us! We'll be posting more about SSAS Compare, business intelligence and our journey into BI in the coming days and weeks. Stay tuned!

    Read the article

  • Showing content from pages at different URLs (masking), possibly with .htaccess

    - by zigojacko
    If I have URLs like:
    domain.com/category/widgets/filter/blue
    domain.com/category/widgets/filter/red
    and it is pretty difficult to reconstruct them into something like:
    domain.com/category/blue-widgets
    domain.com/category/red-widgets
    is there any way at all that I can use URL rewrites, or anything else with .htaccess or on the server, to display the URL as domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URLs but got nowhere, and this has been bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: this is a Magento website, and I am wanting to implement the above process for potentially hundreds of URLs. Edit To respond to @kkugelmann's answer: I couldn't get your proposed RewriteRule to make any difference at all in the .htaccess file, so I started testing a few things in this .htaccess tester. The proposed RewriteRule didn't work in this tester; however, the following did. But adding any of these RewriteRules to the website's .htaccess file did not rewrite the URL at all... Edit2 By the way, if I add [R=301,L] to the end of the rewrite rule, it does then rewrite the URL, but of course it also 301 redirects, which is unwanted behaviour. Edit3 I found another question with the same issue... and an accepted answer that solved the problem, which seemed to have something to do with using mod_proxy and the [P] flag on the rule (if I try this, the page 404s).
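    (For context, a minimal sketch of the kind of internal rewrite being discussed; the pattern, flags and placement are assumptions, and on Magento such a rule generally has to appear before Magento's own front-controller rules:)

        # .htaccess sketch (mod_rewrite): link to the friendly URL and serve the filter page internally,
        # without an external redirect, so the address bar keeps /category/blue-widgets
        RewriteEngine On
        RewriteRule ^category/([a-z0-9]+)-widgets/?$ category/widgets/filter/$1 [L]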

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on its own SSD. The Linux OS I use, which has knowledge of the software RAID system, is on an SSD that I disconnected prior to the install. This was so that Windows (or I) wouldn't inadvertently mess it up. However, and in retrospect foolishly, I left the RAID disks connected, thinking that Windows wouldn't be so ridiculous as to mess with an HDD that it sees as just unallocated space. Boy was I wrong! After copying the installation files to the SSD (as expected and desired), it also created an NTFS partition on one of the RAID disks. Both unexpected and totally undesired! I swapped the SSDs again and booted up in Linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message:
    mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
    dmesg:
    EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
    EXT4-fs (md0): group descriptors corrupted!
    I then used qparted to delete the newly created NTFS partition on /dev/sdd so that it matched the other three (/dev/sd{b,c,e}), and requested a resync of my array with
    echo repair > /sys/block/md0/md/sync_action
    This took around 4 hours, and upon completion dmesg reports:
    md: md0: requested-resync done.
    A bit brief after a 4-hour task, though I'm unsure where other log files exist (I also seem to have messed up my sendmail configuration). In any case: no change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports:
    Version : 1.2
    Creation Time : Wed May 23 22:18:45 2012
    Raid Level : raid6
    Array Size : 3907026848 (3726.03 GiB 4000.80 GB)
    Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
    Raid Devices : 4
    Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Mon May 26 12:41:58 2014
    State : clean
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 4K
    Name : okamilinkun:0
    UUID : 0c97ebf3:098864d8:126f44e3:e4337102
    Events : 423
    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 8 32 1 active sync /dev/sdc
    2 8 48 2 active sync /dev/sdd
    3 8 64 3 active sync /dev/sde
    Trying to mount it still reports:
    mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
    and dmesg:
    EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
    EXT4-fs (md0): group descriptors corrupted!
    I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I think I should attempt: tell mdadm that /dev/sdd (the one that Windows wrote to) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I could also be totally wrong in my assumption, and the creation of the NTFS partition on /dev/sdd and its subsequent deletion may have changed something that cannot be fixed this way. My question: help, what should I do? If I should do what I suggested, how do I do that?
    From reading the documentation, etc., I would think maybe:
    mdadm --manage /dev/md0 --set-faulty /dev/sdd
    mdadm --manage /dev/md0 --remove /dev/sdd
    mdadm --manage /dev/md0 --re-add /dev/sdd
    However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as Linux is concerned, just unallocated space. Maybe these commands won't work without one. Maybe it makes sense to mirror the partition table of one of the other RAID devices that weren't touched before the --re-add, something like:
    sfdisk -d /dev/sdb | sfdisk /dev/sdd
    Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array:
    # mdadm --manage /dev/md0 --set-faulty /dev/sdd
    # mdadm --manage /dev/md0 --remove /dev/sdd
    However, attempting to --re-add was disallowed:
    # mdadm --manage /dev/md0 --re-add /dev/sdd
    mdadm: --re-add for /dev/sdd to /dev/md0 is not possible
    --add was fine:
    # mdadm --manage /dev/md0 --add /dev/sdd
    mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress:
    md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0]
    3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U]
    [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec
    nmon also shows expected output:
    ¦sdb 0% 87.3 0.0| > |¦
    ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦
    ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦
    ¦sde 0% 87.3 0.0|> ||
    It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output:
    [44972.599552] md: md0: recovery done.
    [44972.682811] RAID conf printout:
    [44972.682815] --- level:6 rd:4 wd:4
    [44972.682817] disk 0, o:1, dev:sdb
    [44972.682819] disk 1, o:1, dev:sdc
    [44972.682820] disk 2, o:1, dev:sdd
    [44972.682821] disk 3, o:1, dev:sde
    Attempting to mount /dev/md0 reports:
    mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error
    In some cases useful info is found in syslog - try dmesg | tail or so
    And in dmesg:
    [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)!
    [44984.159912] EXT4-fs (md0): group descriptors corrupted!
    I'm not sure what to do now. Suggestions?
    Output of dumpe2fs /dev/md0:
    dumpe2fs 1.42.8 (20-Jun-2013)
    Filesystem volume name: Atlas
    Last mounted on: /mnt/atlas
    Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
    Filesystem flags: signed_directory_hash
    Default mount options: user_xattr acl
    Filesystem state: clean
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 244195328
    Block count: 976756712
    Reserved block count: 48837835
    Free blocks: 92000180
    Free inodes: 243414877
    First block: 0
    Block size: 4096
    Fragment size: 4096
    Reserved GDT blocks: 791
    Blocks per group: 32768
    Fragments per group: 32768
    Inodes per group: 8192
    Inode blocks per group: 512
    RAID stripe width: 2
    Flex block group size: 16
    Filesystem created: Thu May 24 07:22:41 2012
    Last mount time: Sun May 25 23:44:38 2014
    Last write time: Sun May 25 23:46:42 2014
    Mount count: 341
    Maximum mount count: -1
    Last checked: Thu May 24 07:22:41 2012
    Check interval: 0 (<none>)
    Lifetime writes: 4357 GB
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 256
    Required extra isize: 28
    Desired extra isize: 28
    Journal inode: 8
    Default directory hash: half_md4
    Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051
    Journal backup: inode blocks
    dumpe2fs: Corrupt extent header while reading journal super block
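    (A hedged aside, not from the original post: since the primary group descriptors and journal superblock look damaged, one non-destructive next step is to check the filesystem read-only against a backup superblock. A sketch, assuming ext4 on /dev/md0:)

        # Dry run: report where mke2fs would place backup superblocks without writing anything
        mke2fs -n /dev/md0
        # Read-only fsck against one of the reported backup superblocks (e.g. block 32768, 4K block size)
        e2fsck -n -b 32768 -B 4096 /dev/md0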

    Read the article

  • How to fix unstable RJ45 jacks? [closed]

    - by BeemerGuy
    This is a little project I'm doing at home. I wanted to wire two rooms together (basically, the router is in one room and the switch is in the second room). So I ran a CAT5 cable between the two rooms and wired an RJ45 jack in each room. I then hooked up the two jacks with two CAT5 cables to run the link through the cable tester, and all 8 wires seem good. Now, when I connect the switch and the router, the connection is unstable -- I ping the router and it barely holds on for two pings before it disconnects, and it stays in that unstable state. Just to make sure the router and the switch are OK, I connected them with a long wire between the two rooms and the connection is absolutely stable and pings continuously. What could be the cause of the unstable connection? Especially since it pings a few times, so there IS a connection. But why is it unstable? And how come the cable tester says the cable is OK, but the link is unstable?

    Read the article

  • Developing and implementing a testing plan for a software app deployed on a web server

    - by Abhzoo
    A company in the USA is building a new web app that will be offered as SaaS to customers, and the development is being done by a software development team located in a different country (India). They are about to take delivery of a first demo to provide live feedback to the team in India. The overseas team requires a cloud server (Windows + SQL Standard, 8 GB RAM, 8 vCPUs, 40 GB SSD system disk, 80 GB SSD data disk, 1600 Mb/s network bandwidth) to serve as a test server. When the test server is set up, the team will install the app on it to get live feedback. Q: Explain in detail how you will develop and implement a testing plan for the software app. Be sure to explain the specifics. PLEASE HELP, NEED ANSWER ASAP

    Read the article

  • How do I start nginx on port 80 at OS X login?

    - by Bryson
    I installed Nginx using Homebrew, and after completing the installation the following message was displayed:
    In the interest of allowing you to run `nginx` without `sudo`, the default port is set to localhost:8080. If you want to host pages on your local machine to the public, you should change that to localhost:80, and run `sudo nginx`. You'll need to turn off any other web servers running port 80, of course. You can start nginx automatically on login running as your user with:
    mkdir -p ~/Library/LaunchAgents
    cp #{prefix}/org.nginx.nginx.plist ~/Library/LaunchAgents/
    launchctl load -w ~/Library/LaunchAgents/org.nginx.nginx.plist
    Though note that if running as your user, the launch agent will fail if you try to use a port below 1024 (such as http's default of 80.)
    But I want Nginx running on port 80 at login, and I don't want to have to open Terminal and type sudo nginx to do it. I want it to load from a plist file like Redis and PostgreSQL do. I moved the plist to /Library/LaunchAgents/ from the user folder equivalent and changed its ownership, and I also tried setting the user directive in the nginx.conf file, but I still get the same error message in Console.app:
    nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
    (along with another message telling me that since nginx was being run without super-user privileges, the user directive was being ignored)
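    (An aside with a possible direction, not from the original question: listeners on privileged ports generally need a LaunchDaemon, which runs as root at boot, rather than a LaunchAgent. A minimal sketch of a plist saved as /Library/LaunchDaemons/org.nginx.nginx.plist; the nginx binary path is an assumption, so check it with `which nginx`:)

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.nginx.nginx</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/local/bin/nginx</string>
                <string>-g</string>
                <string>daemon off;</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
            <key>KeepAlive</key>
            <true/>
        </dict>
        </plist>

    The file would be owned by root:wheel and loaded once with something like sudo launchctl load -w /Library/LaunchDaemons/org.nginx.nginx.plist.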

    Read the article

  • Use RAID for desktop computer?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent with computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering whether I should consider RAID, and which RAID level to use. I plan to purchase 2x1TB drives. Currently I'm leaning toward RAID 1 for the redundancy -- I've heard newer super-capacity drives fail more often than one would think, and I don't want to have a problem and lose all my data. What do you think? My mobo supports RAID 0/1/5/10. Is it worth it to use RAID at all, or should I just use a backup service like Mozy? Should I consider RAID 0 instead, for the performance? I'm kind of going back and forth on this one. Thanks a lot for your help. EDIT: I'd like to avoid having the OS drive separate from the data drive, because that can get frustrating when programs like to store a lot of data in their Program Files folder. I've lived with that situation before and it gets annoying after a while.

    Read the article

  • setting up synergy (multi-computer shared input devices) new to mac

    - by pedalpete
    I've just gotten a Mac, and the first thing I'm trying to do in setting it up for development is to set up Synergy. I've run into countless issues and am making progress, but that "Macs just work" theory is getting tired really fast. I'd prefer running the Mac as the server, as it's a desktop -- why use my PC (laptop) as the server? So I downloaded Synergy and tried to edit the 'synergy.conf' file, first with TextMate, then with AppleScript. Neither of these applications will open the file, even after changing TextMate to use plain text format and switching off a few other things to make it into a plain text editor. No good. Uh, isn't that what these script/text editors are for? Then I found SynergyKM, which is a package to run Synergy on OS X. It took LOTS of fiddling, but I think I finally kinda got it figured out. I've got my PC and Mac both running Synergy. However, my PC will only connect to Synergy using the IP address. No problem; however, my Mac won't connect to itself as a Synergy server using the IP address. If I use the IP address as a screen in the 'server configuration file', I get an 'Error: unknown screen name mymac'. I know this shouldn't be super complicated, and this is possibly not the place to find answers about Synergy, but I couldn't find a better place, and there has been some talk of Synergy here. Any chance somebody can help with this?
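    (For reference, a minimal synergy.conf names each screen, optional aliases such as an IP address, and the neighbour links; the screen names and address in the sketch below are placeholders, not values from the question:)

        section: screens
            mymac:
            mypc:
        end
        section: aliases
            mymac:
                192.168.1.10
        end
        section: links
            mymac:
                right = mypc
            mypc:
                left = mymac
        end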

    Read the article

  • File is not compatible with the version of Windows you're running

    - by vaccano
    I have a really old installer (a legacy app) that we are trying to get running on a Windows 7 64-bit OS. Previously it has only been installed on Windows XP 32-bit. I get the following error when I try to run it: The version of this file is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program, and then contact the software publisher. Contacting the software publisher is not an option (the software is super old). Is there a way to get this to work? Some sort of compatibility mode? The only thing I have heard of that will work is a virtual XP machine on the Windows 7 box. The problem is that this software is part of a whole software set; I would have to put all of the pieces on the virtual XP machine or none at all. Before I go down the road of putting it all on the virtual XP machine, I would like to know that there is no way to get it all on the Windows 7 OS.

    Read the article

  • "Modern" Ethernet over coax

    - by Electrons_Ahoy
    So, I've just bought a house. It's reasonably new - built in the early '00s. One of the features that got built in was a cable TV drop in every room. The cabling is gorgeous - there's even a wiring cabinet of sorts in a closet where the cables all tie together to the splitter to the outside line. Of course, my problem is that I only own the one TV. I do, however, own a few computers. What I would love to be able to do is drop a switch in the wiring closet and run 100/1000BASE-T Ethernet over the coax in the walls I wouldn't otherwise be using. My fantasy would be some kind of adapter-plug-thing that would take a coax plug on one side and a cat5/RJ45 plug on the other. Has anyone else done this? Any suggestions? (There are a few other options that suggest themselves - first, I could just use the existing cabling channels and re-run cat5 or 6 through the walls. While tempting, that sounds like more work than I really want to put in, so I'm calling that Plan B. Also, I could just scare up a mess of old 10BASE2 cards and run the house on thinnet, all mid-90s style. While I think I'd get major style points for that, I don't think I can get a 10BASE2 adapter for the new laptop. Also, I have all these super-snazzy gigabit adaptors I'd like to be using. And so forth.)

    Read the article

  • Browsing Audiobooks on an iPod Nano

    - by Electrons_Ahoy
    The situation: I've got a stack of audiobooks in MP3 format in my iTunes library, and both an iPhone and an iPod Nano. After this question, I've changed the Media Kind for the audiobook MP3s from Music to Audiobook. This has been, overall, spectacular, as now I can resume where I was, they show up under Audiobooks, etc. On the iPhone, it's also super convenient, since the interface shows what used to be an album with multiple songs as a single book with multiple chapters, and going into "Audiobooks" presents me with a list of books, not tracks. The Nano, on the other hand, is a little strange. After changing the media type and re-syncing the iPod, the files in question are now listed under Audiobooks rather than Music, and the extra audiobook features are present (resume playback and so on), but the Audiobooks menu just lists all the MP3 tracks on the iPod in alphabetical order, ignoring whichever book/album they belong to - and doesn't seem to let me browse them any other way. This is, clearly, a little sub-optimal. Did I screw something up? How do I get the Nano to treat the files in a similar way to the iPhone and iTunes - as books with chapters? Is there a step I missed somewhere? Do I need to reformat the iPod? Is this even possible? (Footnote: shameless bump, since this just scored me a tumbleweed.)

    Read the article

  • Is On-The-Fly string replacement possible using GreaseMonkey and Firefox

    - by Gary M. Mugford
    I have looked for means to stop Brightcove videos from autostarting in Firefox and have come to the conclusion that it isn't possible without external programming via something like Greasemonkey. However, I'm not proficient in JavaScript, let alone GM, so I thought I'd ask here first whether what I want to do is feasible, or whether it's a fool's errand. What I want to accomplish is to have a site-specific script executed to replace a string value on the fly in that site's code. Specifically, what I am looking for is something GM-style that would do this:
    if site_domain = 'www.SiteWithAutoPlayVideos.com' then
        replace_all('<param name="autoStart" value="true" />', '<param name="autoStart" value="false" />');
    Having looked through Super User for anything Greasemonkey-related, I see notices that the sandbox GM executes scripts in has to remain separate for security reasons, so I suspect I might be in for disappointment. BUT if it is accomplishable and somebody here can confirm it, then I will do my best to struggle through the learning curve and get this noisome little problem put to rest. Yes, I have tried Flash Block and FlashDisable in order to attack this issue, to no avail. Thanks in advance for your time.
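    (For what it's worth, a sketch of the general shape such a userscript could take; the site URL is the asker's placeholder, and with Flash players a change made after the player initialises may come too late, so this is illustrative rather than a confirmed fix:)

        // ==UserScript==
        // @name     Disable autoStart params (sketch)
        // @include  http://www.SiteWithAutoPlayVideos.com/*
        // @run-at   document-end
        // ==/UserScript==

        // Flip any <param name="autoStart" value="true"> to "false"
        var params = document.querySelectorAll('param[name="autoStart"]');
        for (var i = 0; i < params.length; i++) {
            if (params[i].getAttribute('value') === 'true') {
                params[i].setAttribute('value', 'false');
            }
        }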

    Read the article

  • LAN network (switch?)

    - by guywhoneedsahand
    I am working on setting up the network for a small LAN party (fewer than 16 people). Most of them do not have wireless cards in their rigs, so I need to set up some way for everyone to a) play LAN games and b) access the internet. The LAN party will probably take place in my basement, where I have enough space. However, the basement is not wired up to the router, which is actually on the floor above. I made a cantenna a while back that can boost the wireless performance of my computer significantly. How can I use this to provide internet and LAN to guests? My hope was that I could use a switch like this http://www.newegg.com/Product/Product.aspx?Item=N82E16833181166 for the LAN - but how can I give people access to the internet? Is there such a thing as a network extender / 16-port switch? Obviously, the internet performance doesn't need to be super stellar, because the games will be using the LAN - so I am looking to provide some usable internet for web browsing, and very high-speed LAN for games. Thanks!

    Read the article

  • Mountain Lion overheating issue have to do with launchd/Python?

    - by Christopher Jones
    So, ever since I installed ML, my MacBook Air has been running SUPER hot. I opened up Activity Monitor, and everything seemed to be pretty normal, until I had it refresh every 0.5 seconds... and then I started seeing some interesting things. A 'Python' process appears and is terminated several times a second, and uses TONS of CPU (70-110%). Its parent process is 'launchd' - and when I sample the process, there is a lot going on with Python. http://db.tt/ovuX3hZM These appear and disappear too quickly to get a good sample... this one only happened to be using 70-ish percent of CPU, but they consistently hit 100-110%. http://db.tt/ovuX3hZMg The parent process... launchd: lots of context switches and UNIX system calls... What is the deal here? (photo goes here when I earn the street cred) The sample of launchd. ANY help here could be of help to not only me, but possibly many others experiencing decreased battery life and warmer laps these days because of this Mountain Lion weirdness. PLEASE HELP! PS - I'd put the screen grabs inline, but I don't have enough street cred yet.
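    (A hedged aside, not from the original post: one way to catch those short-lived processes is to poll ps and log the full command line and parent PID, then see which script launchd keeps spawning. A sketch; the interval and log path are arbitrary:)

        # Poll twice a second and record any python process with its PID, parent PID, CPU and arguments
        while true; do
            ps -axo pid,ppid,%cpu,command | grep -i '[p]ython' >> ~/python-spawns.log
            sleep 0.5
        done
        # Later, summarise which command lines show up most often (strip the PID/CPU columns first)
        awk '{$1=$2=$3=""; print}' ~/python-spawns.log | sort | uniq -c | sort -rn | head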

    Read the article

  • Dual boot nt4 and windows 98

    - by ItFinallyWorks
    I am trying to dual boot NT4 and Windows 98 SE (don't laugh - old computer). I have seen Microsoft's instructions for doing this, but they limit Windows 98 to a FAT16 partition (NT4's NTLDR doesn't understand FAT32) and therefore only 2GB of disk space. I really need it to have more than that. I started with Win 98 (on the 1st partition), repartitioned the disk, then added NT4 on the 2nd partition. NT4 took over the bootloader (as expected), so NT4 boots, but Win 98 doesn't. Right now I am working in VMware so I can use nonpersistent hard drives (IDE, like the real computer) to recover from errors easily. I've tried using XP's NTLDR with the instructions here: http://www.nu2.nu/fixnt4/ , but I got weird errors from NT4 and it never really worked. If XP's NTLDR would work, it should be able to boot both OSes. I've also tried using GRUB. In theory that should work; in fact, when booting from Super Grub Disk, it does. But as soon as I install GRUB to disk, Win 98 boots, but NT4 blue screens at boot with a 0x0000007B INACCESSIBLE_BOOT_DEVICE error (which can be a lot of things, see MS KB 822051). The incantation I'm using for GRUB 1 is:
    rootnoverify (hd0,1)
    makeactive
    chainloader +1
    boot
    So, anybody have some suggestions?

    Read the article

  • rsync not copying hard links

    - by A.Ellett
    I have two computers (both MacBook Airs) for which I sync one directory tree on both, but not the entire hard drive or any other directories. Let's say on computer A the directory is /Users/aellett/projects and on computer B the directory is /Users/bellett/projects. Generally, I'll log into computer B and then remotely connect to computer A as user 'aellett'. As super user I sync the two project directories as follows:
    rsync -av /Volumes/aellett/projects/ /Users/bellett/projects/
    and this works as expected. On both computers I have another file, letter.txt, in a different directory which is not getting synced. Let's say on computer A the file is found in /Users/aellett/letters, and on computer B the file is found in /Users/bellett/correspondence. Generally, I don't want to share what's not included in /Users/<username>/projects. But I do want to share this particular file. So on both computers I made a correspondence directory in projects, and then I made hard links as follows. On computer A:
    ln /Users/aellett/letters/letter.txt /Users/aellett/projects/correspondence/letter.txt
    On computer B:
    ln /Users/bellett/correspondence/letter.txt /Users/aellett/projects/correspondence/letter.txt
    The next time I synced the two computers I did the following:
    rsync -av -H /Volumes/aellett/projects/ /Users/bellett/projects/
    When I checked on computer B, /Users/bellett/projects/correspondence/letter.txt was correctly synced. But the hard link to /Users/bellett/correspondence/letter.txt was no longer there. In other words, /Users/bellett/projects/correspondence/letter.txt was identical to /Users/aellett/projects/correspondence/letter.txt, but it differed from /Users/bellett/correspondence/letter.txt. Since these two files were hard linked on both computers, I expected them to still have the hard link. Why are my hard links not being preserved?
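    (A hedged note for context: rsync's -H only preserves hard links between names that are both inside the transferred file list, so a link whose other name lives outside projects/ can't survive the sync. A small sketch with made-up paths that illustrates this behaviour:)

        # Two names for the same inode, but only one of them is under the directory being synced
        mkdir -p /tmp/demo/projects/correspondence /tmp/demo/letters
        echo "hello" > /tmp/demo/letters/letter.txt
        ln /tmp/demo/letters/letter.txt /tmp/demo/projects/correspondence/letter.txt

        # Sync only the projects tree; -H cannot relink to letters/letter.txt since that name isn't transferred
        rsync -avH /tmp/demo/projects/ /tmp/demo/projects-copy/
        stat -f %l /tmp/demo/projects-copy/correspondence/letter.txt   # link count is 1 on the destination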

    Read the article

  • Google Chrome not loading web pages correctly unless multiple refreshes

    - by Brandon Wilson
    Web pages in Google Chrome do not load correctly from time to time. I can't reproduce it; it just happens. Sometimes it happens when I load the browser, other times it happens when I am just browsing. Just now I went to five different web sites, and 3 out of 5 of them did not load correctly. I have attached a photo of how Super User loaded the first time I opened it. If I refresh, it will load correctly. Facebook is bad like this. Sometimes Facebook will load correctly, but some of its back-end scripting may not load, so the page may not refresh automatically. Not sure what is going on. I have tried other browsers (Firefox and Internet Explorer) and they seem to be working correctly. Chrome seems to be acting up only on this computer. All my computers are running Windows 8, and I have removed Chrome completely from this computer and re-installed it. I even disabled all extensions and cleared all the caches. I even tried running Chrome without being logged in. Not sure what else to do at this point. An example of superuser.com not loading correctly: when I refresh, the problem goes away until it happens again. Sometimes it takes two or three refreshes for the page to load correctly.

    Read the article

  • How to restore broken Ethernet functionality on Mac G5 running Mac OS 10.4.11 (Tiger)

    - by willc2
    I had a disk error that rendered my Mac unbootable. I repaired it with Tech Tool 4, but now networking does not work. Network Preferences reports that my Ethernet cable is unplugged. I know this is bogus, because when I boot from an emergency partition, networking works correctly. Furthermore, wireless networking is also broken, which I tested with a known-good Wi-Fi dongle. Whenever I try to change Network Port Configurations by creating a new one or renaming an existing one, I get this message in the console (in sets of two, as shown):
    Error - PortScanner - setDevice, device == nil!
    Error - PortScanner - setDevice, device == nil!
    When I try to invoke the Network Diagnostics app, it immediately crashes. My first thought is to reinstall Tiger with the Archive and Install method so I don't have to reinstall all my applications, but I have lost my Tiger installer disk. My next thought is to buy Leopard for $107 on Amazon. If there is any way I can just repair my Tiger install, I would be happy to save that money, though. This is not my main machine and I am loath to put more money into it. How can I recover my network functionality? UPDATE: I found my Tiger install disk and tried an Archive and Install. It failed with an unhelpful error message along the lines of "Can't install, try again". I tried again but got the same error. My guess is that some corrupt or missing file in my User folder is preventing migration. I have a backup created with SuperDuper that is a bit out of date but will start up the machine (with functional networking). I would love to just copy over the file(s) that got messed up, but I don't even know where to look. What is the likely location of the system files that would cause the aforementioned symptoms?

    Read the article
