Search Results


  • What Counts For A DBA: ESP

    - by Louis Davidson
    Now I don’t want to get religious here, and I’m not going to, but what I’m going to describe in this ‘What Counts for a DBA’ installment sometimes feels like magic. Often I will spend hours thinking about the solution to a design issue or coding problem, working diligently to try to come up with a solution, and then finally just give up with the feeling that I’m not even qualified to be a data entry clerk, much less a data architect. At this point I often take a walk (or sometimes a nap), and then it hits me. I realize that I have the answer just sitting in my brain, ready to implement. This phenomenon is not limited to walks either; it can happen almost any time after I stop obsessing about a problem. I call this phenomenon ESP (or Extra-Sensory Programming). Another term for this could be ‘sleeping on it’, and while the idiom tends to mean letting time pass to actively think about a problem, sleeping on a problem also lets you relax and lets your brain do the work.

    I first noticed this back in my college days when I would play video games for hours on end. We would get stuck deep in some dungeon, unable to find a way out, playing for days on end until we were beaten-down tired. Once we gave up and walked away, the solution would usually be there waiting for one of us when we came back to play the next day. Sometimes it would come in the form of a dream, and sometimes the problem would just be easy to solve when we started to play again. While it worked great for video games, it never happened when I studied English Literature for hours on end, or even when I worked for the same sort of frustrating hours attempting to solve a homework problem in Calculus. I believe the difference was that I was passionate about the video game, and certainly far less so about homework where people used the word “thou” instead of “you”, or x to represent a number.

    This phenomenon occurs somewhat more often in my current work as a professional data programmer, because I am very passionate about SQL and love those aspects of my career choice. Every day that I get to draw a new data model to solve a customer issue, or write a complex SELECT statement to ferret out the answer to a complex data question, is a great day. I hope it is the same for any reader of this blog. But, unfortunately, while the day on the whole is great, a heck of a lot of noise is generated in work life. There are the typical project deadlines, along with the requisite project manager sitting on your shoulders shouting slogans to try to make you go faster. Add in office politics, and the occasional family issues that permeate the mind, and you lose the ability to think deeply about any problem, not to mention occasionally forgetting your own name. These office realities, coupled with a difficult SQL problem staring at you from your widescreen monitor, will slowly suck the life force out of your body, making it seem impossible to solve the problem.

    This is when the walk starts; or a nap. Maybe you hide from the madness under your desk like George Costanza hides from Steinbrenner on Seinfeld. Forget about the problem. Free your mind from the insanity of the problem and your surroundings. Then let your training and education deep in your brain take over and see if it will passively do the rest for you. If you don’t end up with a solution, the worst-case scenario is that you have had a bit of exercise or rest, and you won’t have heard the phrase “better is the enemy of good enough” even once…which certainly will do your brain some good. Once you stop whipping your brain for information, inspiration may just strike, and instead of a humdrum solution you find one you hadn’t even considered, almost magically. So, my beloved manager, next time you have an urgent deadline and you come across me taking a nap, creep away quietly, because I’m working, doing some extra-sensory programming.

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts then join us as we take a look at the Forecastfox Weather extension for Google Chrome.

    Getting Started: As soon as the Forecastfox Weather extension has finished installing you will automatically be presented with the “Customize Forecastfox Page”. The default setting is for New York with English measurement units. Enter your location into the blank and hit “Enter” to display the listing for your city/area. If you are presented with multiple options to choose from, simply click on the appropriate listing. Once you have your city/area displayed you will notice that it is possible to have access to weather forecasts for multiple locations. You can easily remove any unneeded listings with the “Remove Link”. For our example we removed the New York listing. Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required).

    Forecastfox Weather in Action: You can hover your mouse over the “Toolbar Button” to see the current weather conditions. Clicking on the “Toolbar Button” opens a popup window with the current conditions, a 7-day forecast, and a static satellite image. If desired you can access additional details for the current weather conditions. Clicking on “details” opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc. Note: AccuWeather.com webpages will have some ads displayed. Perhaps you need the Hourly Forecast… Once again a new tab will be opened with the predicted hourly weather conditions for the current day. Going back to the popup window, you may also select a specific day from the 7-day forecast. You will be presented with a “Day & Night” forecast for the chosen day with links to view “Additional Details & Hourly” information. Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open you can choose from a variety of different satellite images.

    Conclusion: If you have been wanting a solid weather forecast extension for your Chrome browser then Forecastfox Weather is definitely a recommended install. Links: Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits: This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task: By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.

    Partial commits: Sometimes one task explodes or unknowingly encompasses other tasks. At this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way and focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide: If you don't commit often it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side, and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file, and a commit message all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits: There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes.

    Sync with the central repository daily: The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of merge conflicts, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes: If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    Committing single files: Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.
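
    As a concrete illustration of the partial-commit / stash / sync workflow described above, here is a minimal sketch in git syntax (Mercurial has close equivalents via the record and shelve extensions); the branch state and commit messages are hypothetical:

    # stage and commit only the hunks that belong to the finished part of the task
    git add -p
    git commit -m "Task A: first partitioned stopping point"

    # park the remaining exploratory work so it cannot contaminate the next commit
    # (git stash push -m needs a newer git; older versions use 'git stash save')
    git stash push -m "exploratory refactoring, revisit later"

    # sync with the central repository at least once daily
    git pull --rebase
    git push

    # resume the parked work later, or throw it away entirely
    git stash pop        # or: git stash drop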

    Read the article

  • Sign up Today for User Feedback Sessions at Oracle OpenWorld and JavaOne 2012

    - by Lionel Dubreuil
    You’re Invited to Sign Up for Oracle Usability Feedback Sessions. SIGN UP TODAY to get the most from your conference experience by participating in a usability feedback session, where your expertise will help Oracle develop outstanding products and solutions. The Oracle User Experience team is conducting a Usability Evaluation on publishing and accessing Oracle Enterprise Repository content when building SOA projects in JDeveloper. We are asking Developers and Architects who build or integrate applications using SOA Suite to take a look at the interaction between JDeveloper and the Enterprise Repository. We are looking for feedback on this interaction so that we may improve the User Interface in a future release. The feedback sessions will be conducted during the Oracle OpenWorld and JavaOne Conferences, at the Intercontinental Hotel in San Francisco, CA. Sessions will last 1 hour and will be held on Monday, October 1 through Wednesday, October 3, 2012. This event fills up quickly, and space is limited. If you are interested in participating, please send an email to [email protected] with the following information:

    Identification
    Name: _________________________________
    Company Name: _________________________
    Job Title:
    Email:
    Phone Number (work, mobile, include country code):
    Which conference are you attending? _____ Oracle OpenWorld _____ JavaOne
    Have you ever participated in usability activities with Oracle or any of its subsidiaries? ____ Yes; specify: __________________________________________________ ____ No
    Are you currently using JDeveloper? ____ Yes; specify version(s): _______________________________ ____ No
    How long have you used JDeveloper? ____ Less than 1 year ____ 1 - 2 years ____ 3 - 4 years ____ 4+ years
    Are you currently using SOA features in JDeveloper? ____ Yes ____ No
    How long have you used SOA features in JDeveloper? ____ Less than 1 year ____ 1 - 2 years ____ 3 - 4 years ____ 4+ years
    How often do you use SOA features in JDeveloper? ____ Daily ____ 2 - 3 times a week ____ Once a week ____ Once a month or less
    Briefly describe the types of SOA tasks you use JDeveloper to perform: _____________________________________
    Please list your availability: if you know your availability, please let me know which day you would prefer to participate (Monday, Tuesday or Wednesday). Limited sessions are available on each day, and each session lasts 1 hour.

    Thank you for taking the time to complete this questionnaire. It will help us match you to the best-suited feedback session. Once we receive your email, we will contact you to set up a time and day for participation. You'll find more information about our on-site lab on the VoX (Voice of User Experience) blog, and on our Events page at Usable Apps.

    Read the article

  • BizTalk – Mapping repeating EDI segments using a Table Looping functoid

    - by Bill Osuch
    BizTalk’s HIPAA X12 schemas have several repeating date/time segments in them, where the XML winds up looking something like this: <DTM_StatementDate> <DTM01_DateTimeQualifier>232</DTM01_DateTimeQualifier> <DTM02_ClaimDate>20120301</DTM02_ClaimDate> </DTM_StatementDate> <DTM_StatementDate> <DTM01_DateTimeQualifier>233</DTM01_DateTimeQualifier> <DTM02_ClaimDate>20120302</DTM02_ClaimDate> </DTM_StatementDate> The corresponding EDI segments would look like this: DTM*232*20120301~ DTM*233*20120302~ The DateTimeQualifier element indicates whether it’s the start date or end date – 232 for start, 233 for end. So in this example (an X12 835) we’re saying the statement starts on 3/1/2012 and ends on 3/2/2012. When you’re mapping from some other data format, many times your start and end dates will be within the same node, like this: <StatementDates> <Begin>20120301</Begin> <End>20120302</End> </StatementDates> So how do you map from that and create two repeating segments in your destination map? You could connect both the <Begin> and <End> nodes to a looping functoid, and connect its output to <DTM_StatementDate>, then connect both <Begin> and <End> to <DTM_StatementDate> … this would give you two repeating segments, each with the correct date, but how to add the correct qualifier? The answer is the Table Looping Functoid! To test this, let’s create a simplified schema that just contains the date fields we’re mapping. First, create your input schema: And your output schema: Now create a map that uses these two schemas, and drag a Table Looping functoid onto it. The first input parameter configures the scope (or how many times the records will loop), so drag a link from the StatementDates node over to the functoid. Yes, StatementDates only appears once, so this would make it seem like it would only loop once, but you’ll see in just a minute. The second parameter in the functoid is the number of columns in the output table. We want to fill two fields, so just set this to 2. Now drag the Begin and End nodes over to the functoid. Finally, we want to add the constant values for DateTimeQualifier, so add a value of 232 and another of 233. When all your inputs are configured, it should look like this: Now we’ll configure the output table. Click on the Table Looping Grid, and configure it to look like this: Microsoft’s description of this functoid says “The Table Looping functoid repeats with the looping record it is connected to. Within each iteration, it loops once per row in the table looping grid, producing multiple output loops.” So here we will loop (# of <StatementDates> nodes) * (Rows in the table), or 2 times. Drag two Table Extractor functoids onto the map; these are what are going to pull the data we want out of the table. The first input to each of these will be the output of the TableLooping functoid, and the second input will be the row number to pull from. So the functoid connected to <DTM01_DateTimeQualifier> will look like this: Connect these two functoids to the two nodes we want to populate, and connect another output from the Table Looping functoid to the <DTM_StatementDate> record. You should have a map that looks something like this: Create some sample xml, use it as the TestMap Input Instance, and you should get a result like the XML at the top of this post. Technorati Tags: BizTalk, EDI, Mapping
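
    As a quick sanity check before running TestMap, a minimal input instance for the simplified source schema above might look like the sketch below; the root element name and namespace are hypothetical placeholders (yours will match whatever the schema wizard generated), while the StatementDates structure follows the post:

    <ns0:Root xmlns:ns0="http://BizTalkMapDemo.Input">
      <StatementDates>
        <Begin>20120301</Begin>
        <End>20120302</End>
      </StatementDates>
    </ns0:Root>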

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings as it is such a tedious process and it’s not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky. In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code but plenty of your audience won’t be able to make head or tail of it. SSMS 2012 has a zoom facility that can help: but don’t go nuts … Having the font too big means you will be scrolling a lot and the code will again be rendered unreadable.

    There is more though, but you need to take a deep breath and open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first, we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From Import and Export Settings you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option. This time around choose to export settings. Hit Next and Next again, then name your settings profile in the final step of the wizard and click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks.

    So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don’t go too mad and change too many things without taking a look at the results. For every item in the Display Items list you can change font, size, weight, colour, background colour etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font; to achieve that, select Text Editor and then select “Selected Text” in the Display Items listbox. As you change things, the Sample area gives you an idea of what effect you are going to have. Black and yellow is the colour combination with the highest contrast – that’s why bees and wasps* are that colour.

    What next? How about increasing the default font for your demo scripts? This means that any script you open and any new ones that you start will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don’t forget to save this profile – follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more and then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence.

    * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases, made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you - and only you and your organization if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options for how you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered in, and you’ve assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets.

    Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData Feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
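
    To give a flavor of the API option, here is a minimal C# sketch of pulling a published feed over HTTP as an OData/Atom document. The feed URL and credentials are hypothetical placeholders; the real values come from your portal (see the MSDN link above for the actual API):

    using System;
    using System.Net;

    class DataHubFeedDemo
    {
        static void Main()
        {
            var client = new WebClient();
            // OData feeds are served as Atom XML by default
            client.Headers[HttpRequestHeader.Accept] = "application/atom+xml";
            // hypothetical account name and key, assigned in the portal
            client.Credentials = new NetworkCredential("accountName", "accountKey");
            // hypothetical feed URL for a published, discoverable data area
            string feed = client.DownloadString("https://datahub.example.com/v1/Sales/Orders");
            Console.WriteLine(feed);
        }
    }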

    Read the article

  • Ops Center 12c - Update - Provisioning Solaris on x86 Using a Card-Based NIC

    - by scottdickson
    Last week, I posted a blog describing how to use Ops Center to provision Solaris over the network via a NIC on a card rather than the built-in NIC. Really, that was all about how to install Solaris on a SPARC system. This week, we'll look at how to do the same thing for an x86-based server. The overall process is exactly the same, at least for Solaris 11, with only minor updates. We will focus on Solaris 11 for this blog. Once I verify that the same approach works for Solaris 10, I will provide another update.

    Booting Solaris 11 on x86: Just as before, in order to configure the server for network boot across a card-based NIC, it is necessary to declare the asset to associate the additional MACs with the server. You likely will need to access the server console via the ILOM to figure out the MAC and to get a good idea of the network instance number. The simplest way to find both of these is to start a network boot using the desired NIC and see where it appears in the list of network interfaces and what MAC is used when it tries to boot. Go to the ILOM for the server. Reset the server and start the console. When the BIOS loads, select the boot menu, usually with Ctrl-P. This will give you a menu of devices to boot from, including all of the NICs. Select the NIC you want to boot from. Its position in the list is a good indication of what network number Solaris will give the device. In this case, we want to boot from the 5th interface (GB_4, net4). Pick it and start the boot process. When it starts to boot, you will see the MAC address for the interface. Once you have the network instance and the MAC, go through the same process of declaring the asset as in the SPARC case. This associates the additional network interface with the server.

    Creating an OS Provisioning Plan: The simplest way to do the boot via an alternate interface on an x86 system is to do a manual boot. Update the OS provisioning profile as in the SPARC case to reflect the fact that we are booting from a different interface. Update, in this case, the network boot device to be GB_4/net4, or the device corresponding to your network instance number. Configure the profile to support manual network boot by checking the box for manual boot in the OS Provisioning profile.

    Booting the System: Once you have created a profile and plan to support booting from the additional NIC, we are ready to install the server. Again, from the ILOM, reset the system and start the console. When the BIOS loads, select boot from the Boot Menu as above. Select the network interface from the list as before and start the boot process. When the grub bootloader loads, the default boot image is the Solaris Text Installer. On the grub menu, select Automated Installer and Ops Center takes over from there.

    Lessons: The key lesson from all of this is that Ops Center is a valuable tool for provisioning servers whether they are connected via built-in network interfaces or via high-speed NICs on cards. This is great news for modern datacenters using converged network infrastructures. The process works for both SPARC and x86 Solaris installations. And it's easy and repeatable.

    Read the article

  • Ruby gem installation error after OSX Yosemite and Xcode 6 installation

    - by Andres Trevino
    I tried installing a gem like I did before installing Yosemite, but now I'm getting an error:

    /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `synchronize': ERROR: Failed to build gem native extension. (Gem::Ext::BuildError)
    ERROR: Failed to build gem native extension.
    deadlock; recursive locking

    This is the command I wrote:

    sudo gem install mysql2

    This is the message that appears in the terminal:

    Gem files will remain installed in /Library/Ruby/Gems/2.0.0/gems/autotest-fsevent-0.2.9 for inspection.
    Results logged to /Library/Ruby/Gems/2.0.0/extensions/universal-darwin-14/2.0.0/autotest-fsevent-0.2.9/gem_make.out
    Gem files will remain installed in /Library/Ruby/Gems/2.0.0/gems/autotest-fsevent-0.2.9 for inspection.
    Results logged to /Library/Ruby/Gems/2.0.0/extensions/universal-darwin-14/2.0.0/autotest-fsevent-0.2.9/gem_make.out
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `build_extension'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:198:in `block in build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `each'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1436:in `block in build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/user_interaction.rb:45:in `use_ui'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1434:in `build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/stub_specification.rb:60:in `build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/basic_specification.rb:56:in `contains_requirable_file?'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:925:in `block in find_inactive_by_path'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `each'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find_inactive_by_path'
    from /Library/Ruby/Site/2.0.0/rubygems.rb:185:in `try_activate'
    from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:132:in `rescue in require'
    from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:144:in `require'
    from /Library/Ruby/Site/2.0.0/rubygems.rb:601:in `load_yaml'
    from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:328:in `load_file'
    from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:197:in `initialize'
    from /Library/Ruby/Site/2.0.0/rubygems.rb:289:in `new'
    from /Library/Ruby/Site/2.0.0/rubygems.rb:289:in `configuration'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:63:in `run'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/ext_conf_builder.rb:38:in `block in build'
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/tempfile.rb:324:in `open'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/ext_conf_builder.rb:17:in `build'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:161:in `block (2 levels) in build_extension'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:160:in `chdir'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:160:in `block in build_extension'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `synchronize'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:159:in `build_extension'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:198:in `block in build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `each'
    from /Library/Ruby/Site/2.0.0/rubygems/ext/builder.rb:195:in `build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1436:in `block in build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/user_interaction.rb:45:in `use_ui'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:1434:in `build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/stub_specification.rb:60:in `build_extensions'
    from /Library/Ruby/Site/2.0.0/rubygems/basic_specification.rb:56:in `contains_requirable_file?'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:925:in `block in find_inactive_by_path'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `each'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find'
    from /Library/Ruby/Site/2.0.0/rubygems/specification.rb:924:in `find_inactive_by_path'
    from /Library/Ruby/Site/2.0.0/rubygems.rb:185:in `try_activate'
    from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:132:in `rescue in require'
    from /Library/Ruby/Site/2.0.0/rubygems/core_ext/kernel_require.rb:144:in `require'
    from /Library/Ruby/Site/2.0.0/rubygems.rb:601:in `load_yaml'
    from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:328:in `load_file'
    from /Library/Ruby/Site/2.0.0/rubygems/config_file.rb:197:in `initialize'
    from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:74:in `new'
    from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:74:in `do_configuration'
    from /Library/Ruby/Site/2.0.0/rubygems/gem_runner.rb:39:in `run'
    from /usr/bin/gem:21:in `<main>'

    I am using OSX 10.10 and Xcode 6 Beta. Do any of you guys have any idea as to what to do about this? Thanks in advance

    Read the article

  • How can I install the BlackBerry v5.0.0 component pack into Eclipse?

    - by Dan J
    I'm trying to install the latest v5.0.0 "beta 2" BlackBerry OS Component Pack into Eclipse 3.4.2 with BlackBerry Eclipse plugin v1.0.0.67, but have hit a few problems. Has anybody found an easy way to do this? I had no trouble installing the v4.5.0 and v4.7.0 Component Packs. It's rather strange that BlackBerry are shipping new phones with the v5.0.0 OS installed (e.g. a Storm 2 9550 and Bold 9700 that I just bought), and pushing that update to phones, whilst the BlackBerry website still considers the v5.0.0 SDK / Component Packs to be "beta 2"! If anybody knows when an official non-beta Component Pack is going to be released, that might solve my problem... In case it helps, the problems I've hit so far are:

    - Contrary to the implication on the BlackBerry website, the Eclipse "Software Update..." option for the v5.0.0 Component Pack claims it only works on the v1.0.0 Eclipse BlackBerry plugin, not the new v1.1 one.

    - I then tried to install the v5.0.0 Component Pack through the "Software Updates..." menu in Eclipse using the v1.0.0 Eclipse BlackBerry plugin. Once I'd done the 200MB download the install failed with an "Invalid zip file format" error.

    - I might just have been unlucky with a corrupted download, but I did try it twice: once through "Software Updates..." and once by selecting "Archive" to install the downloaded Component Pack (which unlike v4.5.0 and v4.7.0 was a JAR, not a ZIP).

    Read the article

  • NHibernate save / update event listeners: listening for child object saves

    - by James Allen
    I have an Area object which has many SubArea children:

    public class Area
    {
        ...
        public virtual IList<SubArea> SubAreas { get; set; }
    }

    The children are mapped as a uni-directional non-inverse relationship:

    public class AreaMapping : ClassMap<Area>
    {
        public AreaMapping()
        {
            HasMany(x => x.SubAreas).Not.Inverse().Cascade.AllDeleteOrphan();
        }
    }

    The Area is my aggregate root. When I save an area (e.g. Session.Save(area)), the area gets saved and the child SubAreas automatically cascaded. I want to add a save or update event listener to catch whenever my areas and/or subareas are persisted. Say for example I have an area which has 5 SubAreas. If I hook into SaveEventListeners:

    Configuration.EventListeners.SaveEventListeners = new ISaveOrUpdateEventListener[] { mylistener };

    When I save the area, Mylistener is only fired once, only for the area (SubAreas are ignored). I want the 5 SubAreas to be caught as well in the event listener. If I hook into SaveOrUpdateEventListeners instead:

    Configuration.EventListeners.SaveOrUpdateEventListeners = new ISaveOrUpdateEventListener[] { mylistener };

    When I save the area, Mylistener is not fired at all. Strangely, if I hook into SaveEventListeners and SaveOrUpdateEventListeners:

    Configuration.EventListeners.SaveEventListeners = new ISaveOrUpdateEventListener[] { mylistener };
    Configuration.EventListeners.SaveOrUpdateEventListeners = new ISaveOrUpdateEventListener[] { mylistener };

    When I save the area, Mylistener is fired 11 times: once for the area, and twice for each SubArea! (I think because NHibernate is INSERTing the SubArea and then UPDATING with the area foreign key). Does anyone know what I'm doing wrong here, and how I can get the listener to fire once for each area and subarea?
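
    For context, a minimal shape for such a listener might look like the sketch below, assuming NHibernate's synchronous event API; the class name and the logging body are hypothetical stand-ins for whatever mylistener actually does:

    using NHibernate.Event;

    public class MyListener : ISaveOrUpdateEventListener
    {
        public void OnSaveOrUpdate(SaveOrUpdateEvent @event)
        {
            // fires once for each entity the event system dispatches a save/update for
            System.Console.WriteLine("Persisting: " + @event.Entity.GetType().Name);
        }
    }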

    Read the article

  • Missing a constant on load.. how can I get around this? (Rails::Plugin::OpenID)

    - by Chris Kimpton
    I have a Rails 2 project that I am trying to upgrade to Rails 3, but am getting some issues with bundler. When I run "rake", it runs the tests just fine. But when I run "bundle exec rake" it fails to find a constant. The error is this:

    /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/dependencies.rb:131:in `const_missing': uninitialized constant Rails::Plugin::OpenID (NameError)
    from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/vendor/plugins/open_id_authentication/init.rb:16:in `evaluate_init_rb'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `call'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:182:in `evaluate_method'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:166:in `call'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `each'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `send'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:90:in `run'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/activesupport-2.3.9/lib/active_support/callbacks.rb:276:in `run_callbacks'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `send'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/actionpack-2.3.9/lib/action_controller/dispatcher.rb:51:in `run_prepare_callbacks'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:631:in `prepare_dispatcher'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:185:in `process'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `send'
    from /Users/kimptoc/.rvm/gems/ruby-1.8.7-p330@p-borisbikestats-pre-rails3/gems/rails-2.3.9/lib/initializer.rb:113:in `run'
    from /Users/kimptoc/Documents/ruby/borisbikes/borisbikestats.pre3/config/environment.rb:9
    from ./test/test_helper.rb:2:in `require'
    from ./test/test_helper.rb:2

    I have these gems installed:

    $ gem list

    *** LOCAL GEMS ***

    actionmailer (2.3.9)
    actionpack (2.3.9)
    activerecord (2.3.9)
    activeresource (2.3.9)
    activesupport (2.3.9)
    authlogic (2.1.3)
    bundler (1.0.7)
    gravtastic (2.2.0)
    linecache (0.43)
    mocha (0.9.10)
    newrelic_rpm (2.13.4)
    parseexcel (0.5.2)
    rack (1.1.0)
    rack-openid (1.1.1)
    rails (2.3.9)
    rake (0.8.7)
    ruby-debug-base (0.10.5.jb2, 0.10.4)
    ruby-debug-ide (0.4.15)
    ruby-openid (2.1.8, 2.1.7, 2.0.4)
    sqlite3-ruby (1.3.2)

    The bundler Gemfile is as follows:

    source 'http://rubygems.org'
    #gem 'rails', '3.0.3'
    gem "rails", "2.3.9"
    gem "activesupport", "2.3.9"
    gem "ruby-openid", "2.1.7", :require => "openid"
    #gem "authlogic-oid", "1.0.4"
    # Bundle edge Rails instead:
    # gem 'rails', :git => 'git://github.com/rails/rails.git'
    gem 'sqlite3-ruby', :require => 'sqlite3'
    gem "authlogic", "= 2.1.3"
    gem "newrelic_rpm"
    # gem "facebooker"
    gem "parseexcel"
    gem 'gravtastic', '= 2.2.0'
    gem "rack-openid", '=1.1.1', :require => 'rack/openid' # not sure what this does...
    gem "mocha"

    I have these plugins installed:

    2dc_jqgrid
    authlogic_openid
    open_id_authentication
    squirrel

    I see these similar questions: "Missing a constant on load.. how can I get around this?" and "Requiring gem in Rails 3 Controller failing with 'Constant Missing'". But their solutions don't seem to work for my situation. I am guessing the issue is around the plugins, but my ruby-foo is too weak. Thanks in advance, Chris

    Read the article

  • Updating UIView subclass contents

    - by Brett
    Hi; I am trying to update a UIView subclass to draw individual pixels (rectangles) after calculating if they are in the Mandelbrot set. I am using the following code but am getting a blank screen:

    //drawingView.h
    ...
    -(void)drawRect:(CGRect)rect{
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(context, r, g, b, 1);
        CGContextSetRGBStrokeColor(context, r, g, b, 1);
        CGRect arect = CGRectMake(x,y,1,1);
        CGContextFillRect(context, arect);
    }
    ...

    //viewController.m
    ...
    - (void)viewDidLoad {
        [drawingView setX:10];
        [drawingView setY:10];
        [drawingView setR:0.0];
        [drawingView setG:0.0];
        [drawingView setB:0.0];
        [super viewDidLoad];
        [drawingView setNeedsDisplay];
    }
    ...

    Once I figure out how to display one pixel, I'd like to know how to go about updating the view by adding another pixel (once calculated). Once all the variables change, how do I keep the current view as is but add another pixel? Thanks, Brett

    Read the article

  • Semantic Grid System, Media Query issue

    - by Andy
    I'm using the Semantic Grid System to build a responsive site. However, something isn't quite right with the media queries that should obviously kick in once it hits a particular screen size. I'll reference what I mean with their example on the website: if I view this on my iPhone, for example, given that it is supposed to adjust to a single-column structure on a mobile device, it still throws out the web version of the page. That is true for both Safari and Chrome on my iPhone. However, if I use the RWD bookmarklet to check its appearance at different resolutions, it appears as expected for the mobile resolution. Also, ironically, if I resize the page in Safari on my desktop it also adjusts accordingly once I get down to the appropriate screen size, but not in Firefox. The media query that it uses once it hits 720px is

    @media screen and (max-width: 720px) {
        #maincolumn, #sidebar {
            .column(12);
            margin-bottom: 1em;
        }
    }

    and I might be wide of the mark here, but I think that must be the issue. But given that this is directly from the semantic.gs website, I'm not inclined to question their own code. Any idea what the problem might be?

    Read the article

  • Applying a Dojo Toolkit (Dijit) theme to ASP.NET pages.

    - by mcoolbeth
    In the code below, I am trying to apply a Dijit theme to the controls in my .aspx page. However, the controls persist in their normal, unthemed appearance. Anybody know why? Master Page: <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Main.master.cs" Inherits="WebJournalEntryClient.Main" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>My Web Application</title> <link rel="stylesheet" href="dojoroot/dijit/themes/tundra/tundra.css" /> <script type="text/javascript" src="dojoroot/dojo/dojo.js"/> <script type="text/javascript"> dojo.require("dijit.form.Button"); dojo.require("dijit.form.TextBox"); dojo.require("dijit.form.ComboBox"); </script> </head> <body class = "tundra"> <form id="form1" runat="server"> <div> <div> This is potentially space for a header bar. </div> <table> <tr> <td> Maybe <br /> a <br /> Side <br /> bar. </td> <td> <asp:ContentPlaceHolder ID="CenterPlaceHolder" runat="server"/> </td> </tr> </table> <div> This is potentially space for a footer bar. </div> </div> </form> </body> </html> Content Page: <%@ Page Title="" Language="C#" MasterPageFile="~/Main.Master" AutoEventWireup="true" CodeBehind="LogIn.aspx.cs" Inherits="WebJournalEntryClient.LogIn" %> <asp:Content ID="Content" ContentPlaceHolderID="CenterPlaceHolder" runat="server"> <div> User ID: <asp:TextBox ID = "UserName" dojoType="dijit.form.TextBox" runat="server" /><br /> Password: <asp:TextBox ID = "PassWord" dojoType="dijit.form.TextBox" runat="server" /><br /> <asp:Button ID="LogInButton" Text="Log In" dojoType="dijit.form.Button" runat="server" /> </div> </asp:Content>

    Read the article

  • Compressing xls content with apache deflate module

    - by Clinton Bosch
    I am trying to compress an Excel spreadsheet being sent from my application using the Apache deflate module. I have added the following line to my sites-enabled file:

    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/excel

    But it seems to make the response data bigger??? Using Firebug, without the module I downloaded the xls spreadsheet from the application and it downloaded 100Kb of data; the file size once on the filesystem was also 100Kb as expected. Once I enabled the deflate module as described above and repeated the process, the amount of data downloaded was 295Kb?? but the file was still only 100Kb once saved on the filesystem. As an experiment I manually gzipped the saved xls file and it compressed to 20Kb. What am I doing wrong here?

    Using deflate (Firebug output):

    200 OK xxxxxxx.co.za 293 KB 4.43s
    Response Headers
    Date Tue, 03 Nov 2009 13:01:43 GMT
    Server Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
    Content-Disposition attachment; filename="Employee List.xls"
    Vary Accept-Encoding
    Content-Encoding gzip
    Content-Type application/excel

    Without deflate (Firebug output):

    200 OK xxxxxxxx.co.za 100 KB 3.46s
    Response Headers
    Date Tue, 03 Nov 2009 13:06:00 GMT
    Server Apache/2.2.4 (Ubuntu) mod_jk/1.2.23 PHP/5.2.3-1ubuntu6.4 mod_ssl/2.2.4 OpenSSL/0.9.8e
    Content-Disposition attachment; filename="Employee List.xls"
    Content-Length 102912
    Content-Type application/excel
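
    As an aside, one way to sanity-check what the server actually transfers, independent of Firebug, is to compare download sizes with and without compression negotiation from the command line; a minimal sketch using curl, with a placeholder URL standing in for the real download link:

    # bytes transferred when the client does not request compression (URL hypothetical)
    curl -s -o /dev/null -w '%{size_download}\n' http://example.co.za/report.xls

    # bytes transferred when the client advertises gzip support and decodes the response
    curl -s -o /dev/null -w '%{size_download}\n' --compressed http://example.co.za/report.xls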

    Read the article

  • How do I represent concurrent actions in jBPM, any of which can end a process?

    - by tpdi
    An example: a permit must be examined by two lawyers and one engineer. If any of those three reject it, the process enters a "rejected" end state. If all three grant the permit, it enters a "granted" end state. All three examiners may examine simultaneously, or in any order. Once one engineer has granted it, it shouldn't be available to be examined by an engineer; once two lawyers have examined it, it shouldn't be available to lawyers; once one engineer and two lawyers have examined it, it should go to the granted end state. My initial thinking is that either I have an overly complicated state transition diagram, with "the same" intermediate states multiply repeated, or I carry (external) state with the process { bool rejected; int engineerSignoffId; int lawyer1SignoffId; int lawyer2SignoffId}. Or something like this? If so, how does the engineer's rejection terminate the subprocess that is in "Lawyers"?

    START->FORK->Engineer->Granted?---------------->Y->JOIN-->Granted
             |->Lawyers-->Granted?->by 2 lawyers?->Y---^
             ^                    |
             |--------------------N

    What's the canonical jBPM answer to this? Can you point me to examples or documentation of such answers? Thanks.

    Read the article

  • python thread prob after build

    - by Apache
    Hi experts, I have a task to scan wifi at a specific interval and send the results to a server. I have it in Python and it works fine when I run it manually, but once I build it into a package there is no progress at all. I already asked this question before at http://stackoverflow.com/questions/2735410/python-scritp-problem-once-build-and-package-it. I then re-modified my code as below, and found that the thread is not functioning once I build:

    #!/usr/bin/env python
    import subprocess,threading,...

    configFile = open('/opt/Jemapoh_Wifi/config.txt', 'r')
    url = configFile.readline().strip()
    intervalTime = configFile.readline().strip()
    status = configFile.readline().strip()
    print "url "+url
    print "intervalTime "+intervalTime
    print "Status "+status.strip()

    def getMacAddress():
        proc = subprocess.Popen('ifconfig -a wlan0 | grep HWaddr | sed \'/^.*HWaddr */!d; s///;q\'', shell=True, stdout=subprocess.PIPE, )
        macAddress = proc.communicate()[0].strip()
        return macAddress

    def getTimestamp():
        from time import strftime
        timeStamp = strftime("%Y-%m-%d %H:%M:%S")
        return timeStamp

    def scanWifi():
        try:
            print "Scanning..."
            proc = subprocess.Popen('iwlist scan 2>/dev/null', shell=True, stdout=subprocess.PIPE, )
            stdout_str = proc.communicate()[0]
            stdout_list = stdout_str.split('\n')
            essid = []
            rssi = []
            preQuality = []
            for line in stdout_list:
                line = line.strip()
                match = re.search('ESSID:"(\S+)"', line)
                if match:
                    essid.append(match.group(1))
                match = re.search('Quality=(\S+)', line)
                if match:
                    preQuality.append(match.group(1))
            for qualityConversion in preQuality:
                qualityConversion = qualityConversion.split()[0].split('/')
                temp = str(int(round(float(qualityConversion[0]) / float(qualityConversion[1]) * 100))).rjust(2)
                rssi.append(temp)
            dataToPost = '{"userId":"' + getMacAddress() + '","timestamp":"' + getTimestamp() + '","wifi":['
            for no in range(len(essid)):
                dataToPost += '{"ssid":"' + essid[no] + '","rssi":"' + rssi[no] + '"}'
                if no+1 == len(essid):
                    pass
                else:
                    dataToPost += ','
            dataToPost += ']}'
            query_args = {"data": dataToPost}
            request = urllib2.Request(url)
            request.add_data(urllib.urlencode(query_args))
            request.add_header('Content-Type', 'application/x-www-form-urlencoded')
            print "Waiting for server response..."
            print urllib2.urlopen(request).read()
            print "Data Sent @ " + getTimestamp()
            print "------------------------------------------------------"
            t = threading.Timer(int(intervalTime), scanWifi).start()
        except Exception, e:
            print e

    t = threading.Timer(int(intervalTime), scanWifi)
    t.start()

    Once built, it is not reaching the thread. Can anyone help with why the thread is not working after the build? Thanks

    Read the article

  • Rails routing to XML/JSON without views gone mad

    - by John Schulze
    I have a mystifying problem. In a very simple Ruby app I have three classes: Drivers, Jobs and Vehicles. All three classes consist only of Id and Name. All three classes have the same #index and #show methods and only render in JSON or XML (this is in fact true for all their CRUD methods; they are identical in everything but name). There are no views. For example:

    def index
      @drivers = Driver.all
      respond_to do |format|
        format.js  { render :json => @drivers }
        format.xml { render :xml => @drivers }
      end
    end

    def show
      @driver = Driver.find(params[:id])
      respond_to do |format|
        format.js  { render :json => @driver }
        format.xml { render :xml => @driver }
      end
    end

    The models are similarly minimalistic and only contain:

    class Driver < ActiveRecord::Base
      validates_presence_of :name
    end

    In routes.rb I have:

    map.resources :drivers
    map.resources :jobs
    map.resources :vehicles
    map.connect ':controller/:action/:id'
    map.connect ':controller/:action/:id.:format'

    I can perform POST/create, GET/index and PUT/update on all three classes, and GET/read used to work as well until I installed the "has many polymorphs" ActiveRecord plugin and added to environment.rb:

    require File.join(File.dirname(__FILE__), 'boot')
    require 'has_many_polymorphs'
    require 'active_support'

    Now for two of the three classes I cannot do a read any more. If I go to localhost:3000/drivers they all list nicely in XML, but if I go to localhost:3000/drivers/3 I get an error:

    Processing DriversController#show (for 127.0.0.1 at 2009-06-11 20:34:03) [GET]
    Parameters: {"id"=>"3"}
    Driver Load (0.0ms) SELECT * FROM "drivers" WHERE ("drivers"."id" = 3)
    ActionView::MissingTemplate (Missing template drivers/show.erb in view path app/views):
    app/controllers/drivers_controller.rb:14:in `show'
    ...etc

    This is followed by another unexpected error:

    Processing ApplicationController#show (for 127.0.0.1 at 2009-06-11 21:35:52) [GET]
    Parameters: {"id"=>"3"}
    NameError (uninitialized constant ApplicationController::AreaAccessDenied):
    ...etc

    What is going on here? Why does the same code work for one class but not the other two? Why is it trying to do a #view on the ApplicationController? I found that if I create a simple HTML view for each of the three classes these work fine. To each class I add:

    format.html # show.html.erb

    With this in place, going to localhost:3000/drivers/3 renders the item in HTML and I get no errors in the log. But if I attach .xml to the URL it again fails for two of the classes (with the same error message as before) while one will output XML as expected. Even stranger, on the two failing classes, when adding .js to the URL (to trigger JSON rendering) I get the HTML output instead! Is it possible this has something to do with the "has many polymorphs" plugin? I have heard of people having routing issues after installing it. Removing "has many polymorphs" and "active support" from environment.rb (and rebooting the server) seems to make no difference whatsoever. Yet my problems started after it was installed. I've spent a number of hours on this problem now and am starting to get a little desperate; any enlightenment or hint gratefully received! JS

    Read the article

  • Singleton Issues in iOS http client

    - by Andrew Lauer Barinov
    I implemented an HTTP client in iOS that is meant to be a singleton. It is a subclass of AFNetworking's excellent AFHTTPClient. My access method is pretty standard:

        + (LBHTTPClient *)sharedHTTPClient
        {
            static dispatch_once_t once = 0;
            static LBHTTPClient *sharedHTTPClient = nil;
            dispatch_once(&once, ^{
                sharedHTTPClient = [[LBHTTPClient alloc] initHTTPClient];
            });
            return sharedHTTPClient;
        }

        // my init method
        - (id)initHTTPClient
        {
            self = [super initWithBaseURL:[NSURL URLWithString:kBaseURL]];
            if (self) {
                // Subclass-specific initializations
            }
            return self;
        }

    Super's init method is very long, and can be found here. However, the problem I experience is that when the dispatch_once block runs, the app locks up and becomes unresponsive. When I step through the code, I notice that it locks on dispatch_once and then remains frozen. Is this something to do with the main thread locking down? How come it stays locked like that? Also, none of the HTTP client methods fire. Stepping through the code some more, I find that this line in dispatch/once.h is where the app locks down:

        DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
        void
        _dispatch_once(dispatch_once_t *predicate, dispatch_block_t block)
        {
            if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
                dispatch_once(predicate, block); // App locks here
            }
        }
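
    One classic cause of this exact symptom, offered as a guess from the description rather than a confirmed diagnosis: dispatch_once must not be entered recursively, so if anything inside the initialization block ends up calling +sharedHTTPClient again (for example, code reached from super's initWithBaseURL:), the second call waits forever for the first to finish and the app freezes on that line. It is cheap to check with a breakpoint inside initHTTPClient. A rough Python sketch of the same deadlock shape, with hypothetical names (shared_client, make_client):

        import threading

        # A once-guarded singleton accessor; the lock plays the role of
        # dispatch_once's predicate.
        _once_lock = threading.Lock()
        _shared = None

        def make_client():
            # If construction calls back into shared_client(), directly or
            # indirectly, it blocks on _once_lock, which this thread already
            # holds: a self-deadlock, like re-entering dispatch_once.
            return object()

        def shared_client():
            global _shared
            with _once_lock:
                if _shared is None:
                    _shared = make_client()
            return _shared

        print(shared_client() is shared_client())  # True: one shared instance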

    Read the article

  • Refresh decorator

    - by Morgoth
    I'm trying to write a decorator that 'refreshes' after being called, but where the refreshing only occurs once after the last function exits. Here is an example:

        @auto_refresh
        def a():
            print "In a"

        @auto_refresh
        def b():
            print "In b"
            a()

    If a() is called, I want the refresh function to be run after exiting a(). If b() is called, I want the refresh function to be run after exiting b(), but not after a() when called by b(). Here is an example of a class that does this:

        class auto_refresh(object):
            def __init__(self, f):
                print "Initializing decorator"
                self.f = f

            def __call__(self, *args, **kwargs):
                print "Before function"
                if 'refresh' in kwargs:
                    refresh = kwargs.pop('refresh')
                else:
                    refresh = False
                self.f(*args, **kwargs)
                print "After function"
                if refresh:
                    print "Refreshing"

    With this decorator, if I run

        b()
        print '---'
        b(refresh=True)
        print '---'
        b(refresh=False)

    I get the following output:

        Initializing decorator
        Initializing decorator
        Before function
        In b
        Before function
        In a
        After function
        After function
        ---
        Before function
        In b
        Before function
        In a
        After function
        After function
        Refreshing
        ---
        Before function
        In b
        Before function
        In a
        After function
        After function

    So when written this way, not specifying the refresh argument means that refresh defaults to False. Can anyone think of a way to change this so that refresh is True when not specified? Changing refresh = False to refresh = True in the decorator does not work:

        Initializing decorator
        Initializing decorator
        Before function
        In b
        Before function
        In a
        After function
        Refreshing
        After function
        Refreshing
        ---
        Before function
        In b
        Before function
        In a
        After function
        Refreshing
        After function
        Refreshing
        ---
        Before function
        In b
        Before function
        In a
        After function
        Refreshing
        After function

    because refresh then gets called multiple times in the first and second cases, and once in the last case (when it should run once in the first and second cases, and not at all in the last case).
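
    One possible way to get the asked-for behaviour — a sketch under the assumption that "refresh by default" means the outermost decorated call should refresh unless explicitly told not to: default refresh to True and keep a nesting counter on the decorator class, so nested decorated calls never trigger the refresh themselves.

        class auto_refresh(object):
            # Shared across all decorated functions: how many decorated
            # calls are currently on the stack.
            _depth = 0

            def __init__(self, f):
                self.f = f

            def __call__(self, *args, **kwargs):
                refresh = kwargs.pop('refresh', True)  # default is now True
                auto_refresh._depth += 1
                try:
                    return self.f(*args, **kwargs)
                finally:
                    auto_refresh._depth -= 1
                    # Only the outermost call refreshes, and only if the
                    # caller did not opt out with refresh=False.
                    if auto_refresh._depth == 0 and refresh:
                        print("Refreshing")

        @auto_refresh
        def a():
            print("In a")

        @auto_refresh
        def b():
            print("In b")
            a()

        b()               # refreshes once, after the outer call exits
        b(refresh=True)   # refreshes once
        b(refresh=False)  # never refreshes, even though a() defaults to True

    The class-level counter is shared by every decorated function and is not thread-safe, which is assumed to be acceptable for this use.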

    Read the article

  • Capistrano fails for multiple host deployments

    - by morris082
    I be at a loss here, and after scouring the seas (read: the internet) for solutions I am left with nothing other than to hit up the Stack. Any help appreciated. I have Capistrano running locally for deployments onto several different environments (I'm on Windows 7, fwiw). All was well until I needed to deploy to multiple :app servers during a single deployment. Usually I'm prompted for my ssh passphrase once per deployment when I call 'cap deploy'. I have ssh-agent running (Git never pesters me for my passphrase), but despite this Capistrano has always asked me once each deployment. Regardless, it always worked when deploying to ONE host. Now, when I attempt to deploy to multiple servers at once, it appears to ask for my passphrase multiple times (IPs removed by me):

        servers: ["redacted", "redacted"]
        Enter passphrase for ~/.ssh/id_rsa: Enter passphrase for ~/.ssh/id_rsa:

    So with the above I enter my passphrase, but this doesn't work. It waits a little while, then spits out this error:

        connection failed for: <one of the server ips> (NoMethodError: undefined method `overwrite' for nil:NilClass)

    And that's the end of that. I can ssh into the servers I'm deploying to without a password just fine. I'm pretty certain ssh-agent is running, since I can hit Git without entering my passphrase every time. Using the 'forward_agent' setting in cap deploy did not work. This is my role:

        role :app, "ip 1 removed", "ip 2 removed"

    If I set default_run_options[:max_hosts] = 1, it works OK, but it asks for my passphrase for every single connection to each host I'm deploying to, which ends up being a lot. Essentially I'm looking for any of the below (but not limited to):

    - "You're never going to fix that on Windows"
    - "This is how you get REAL passwordless deployment in Capistrano"
    - "Have you overlooked this setting/feature?"
    - "I have a rock that can fix anything, you may borrow it"

    Thanks!

    Read the article

  • Rhinomocks DynamicMock question

    - by epitka
    My dynamic mock behaves like a partial mock, meaning it executes the actual code when called. Here are the ways I tried it:

        var mockProposal = _mockRepository.DynamicMock<BidProposal>();
        SetupResult.For(mockProposal.CreateMarketingPlan(null, null, null))
            .IgnoreArguments().Repeat.Once().Return(copyPlan);
        //Expect.Call(mockProposal.CreateMarketingPlan(null, null, null))
        //    .IgnoreArguments().Repeat.Once().Return(copyPlan);
        //mockProposal.Expect(x => x.CreateMarketingPlan(null, null, null))
        //    .IgnoreArguments().Return(copyPlan).Repeat.Once();

    Instead of just returning what I expect, it runs the code in the CreateMarketingPlan method. Here is the error:

        System.NullReferenceException: Object reference not set to an instance of an object.
        at Policy.Entities.MarketingPlan.SetMarketingPlanName(MarketingPlanDescription description) in MarketingPlan.cs: line 76
        at Policy.Entities.MarketingPlan.set_MarketingPlanDescription(MarketingPlanDescription value) in MarketingPlan.cs: line 91
        at Policy.Entities.MarketingPlan.Create(PPOBenefits ppoBenefits, MarketingPlanDescription marketingPlanDescription, MarketingPlanType marketingPlanType) in MarketingPlan.cs: line 23
        at Policy.Entities.BidProposal.CreateMarketingPlan(PPOBenefits ppoBenefits, MarketingPlanDescription marketingPlanDescription, MarketingPlanType marketingPlanType) in BidProposal.cs: line 449
        at Tests.Policy.Services.MarketingPlanCopyServiceTests.can_copy_MarketingPlan_with_all_options() in MarketingPlanCopyServiceTests.cs: line 32

    Update: I figured out what it was. The method was not marked virtual, so it could not be mocked: non-virtual methods cannot be proxied.

    Read the article

  • Horizontally-centering an absolute position to match a relative position

    - by Chris Vandevelde
    I'm trying to make a div box, containing various elements, fix itself to the top of the page once the page has been scrolled far enough that the box would otherwise be out of view, but scroll normally until that point (like the behaviour at http://perldoc.perl.org/perl.html). The conditionally-fixed part is pretty simple to implement (set the position to "fixed" once the user has scrolled past a certain point, and back to "static" once they scroll back up), but I'm having trouble with the positioning and dimensions: it screws up if I don't specify an absolute size (if I use % or "auto" rather than px, em, cm, etc.), and it confusingly left-aligns if the box is less than the width of the page. I more or less understand why; I'm just trying to fix it. My strategy right now is to have an invisible div hold the place of the box and use Prototype's clonePosition() function to copy its position, but it doesn't seem to work for some reason. Neither does copying the margin from one element to the other. Any ideas? Bonus points and eternal gratitude if you can come up with an approach that adjusts itself when the browser window resizes (like auto margins) without setting an onresize event.

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >