Search Results

Search found 5682 results on 228 pages for 'lord of scripts'.

Page 48/228

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
    I've created two simple, yet very useful, scripts to extract some useful data for quickly monitoring SSIS package execution in SQL Server 2012 and later: get-ssis-execution-status and get-ssis-data-pumped-rows. I've started to use Gist since it comes in very handy for these quick-and-dirty scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time; I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine). Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with the latest successful and running executions (so that one can have an idea of the expected run time), error messages, and warning messages related to duplicate rows found in lookups. The second one ("get-ssis-data-pumped-rows") returns information on DataFlow status. Here there's something interesting, IMHO. Nothing exceptional, let it be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the DataFlow component. This helps to quickly understand how many rows have been sent and where, without having to increase the logging level. Enjoy! PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course any feedback on this is welcome.
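
    For readers who only want the flavour of the approach, here is a minimal T-SQL sketch of the kind of queries involved. This is not the actual Gist code; it simply reads the standard SSISDB catalog views that ship with SQL Server 2012.

      -- Executions and their status, most recent first (hedged sketch, not the original script)
      SELECT e.folder_name, e.project_name, e.package_name,
             e.status, e.start_time, e.end_time
      FROM   SSISDB.catalog.executions AS e
      ORDER BY e.start_time DESC;

      -- Rows pushed down each data flow path, taken from the execution statistics
      SELECT s.package_name, s.task_name,
             s.source_component_name, s.destination_component_name,
             SUM(s.rows_sent) AS rows_sent
      FROM   SSISDB.catalog.execution_data_statistics AS s
      GROUP BY s.package_name, s.task_name,
               s.source_component_name, s.destination_component_name;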


  • Broadcom BCM4313 takes ages to connect

    - by Drazgo
    I'm having issues with my broadcom BCM4313 wireless adapter. Everything works just fine when connected (with additional drivers & Connman), but it takes about 5 minutes to connect to my network when i just started my computer! When resuming from hibernation it goes very quick though, so just when I boot my pc it's taking forever... This is what I found in the dmesg output: [ 16.778057] eth1: Broadcom BCM4727 802.11 Hybrid Wireless Controller 5.60.48.36 [ 16.808768] type=1400 audit(1295859939.727:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient3" pid=833 comm="apparmor_parser" [ 16.808815] type=1400 audit(1295859939.727:3): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient3" pid=799 comm="apparmor_parser" [ 16.808825] type=1400 audit(1295859939.727:4): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient3" pid=826 comm="apparmor_parser" [ 16.809367] type=1400 audit(1295859939.727:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=833 comm="apparmor_parser" [ 16.809415] type=1400 audit(1295859939.727:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=799 comm="apparmor_parser" [ 16.809435] type=1400 audit(1295859939.727:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=826 comm="apparmor_parser" [ 16.809705] type=1400 audit(1295859939.727:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=833 comm="apparmor_parser" [ 16.809755] type=1400 audit(1295859939.727:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=799 comm="apparmor_parser" [ 16.809769] type=1400 audit(1295859939.727:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=826 comm="apparmor_parser" [ 16.844083] alloc irq_desc for 22 on node -1 [ 16.844087] alloc kstat_irqs on node -1 Any ideas how come? Thanks in advance!


  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we keep hitting is having to alter or add to that script to add or fix data for the deployment. For example, if we add a new non-nullable column to an existing table we need to populate that column with data during the deployment. Other times we may want to create new records in transactional tables (e.g. assigning specific users to a new security access level). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once, and somehow include them in the build? Does it know which scripts need to be run (for example, if we are inserting default data we don't want to do it a second time)? Or is there a better way to manage these scripts?
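
    One common pattern, sketched below with invented table and script names rather than any Visual Studio Database Project feature, is a post-deployment script that logs which one-off data scripts have already run, so the same fix is never applied twice:

      -- Guarded one-off data fix; dbo.DeploymentScriptLog and the script name are hypothetical
      IF NOT EXISTS (SELECT 1 FROM dbo.DeploymentScriptLog WHERE ScriptName = '048-backfill-region')
      BEGIN
          UPDATE dbo.Customer SET Region = 'Unknown' WHERE Region IS NULL;

          INSERT INTO dbo.DeploymentScriptLog (ScriptName, RunDate)
          VALUES ('048-backfill-region', GETDATE());
      END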


  • Best practices in versioning

    - by Gerenuk
    I develop some scripts for data analysis in a small team. For the moment we use SVN, but not in a very structured way; we haven't even looked at how to use branches, even though we need that functionality. What do you suggest as best practice for setting up the following system: two code bases (core and plugins); versions can be incompatible with previous scripts; sometimes individual features are being developed and not yet finished, while other fixes have to be made urgently to the code. In the end we don't deliver the code as a package, but rather place the Python scripts in some directory (with version names?). Some other Python script, which serves as a configuration, chooses the desired version, sets the path to these libraries and then starts to import the modules. I saw stable releases being named "trunk", so I did the same. However, no version numbers yet. Core and plugins are different repositories, but we have to match versions for compatibility. Can you suggest some best practices or references to ease development and reduce chaos? :) Some suggested Git. I haven't heard about it, but I'm free to change.
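
    Purely as an illustration of how branches and tags could cover the "urgent fix while a feature is unfinished" case, assuming a switch to Git (all branch and tag names below are invented):

      # inside the existing core repository
      git checkout -b develop                        # integration branch for unfinished work
      git checkout -b feature/new-importer develop   # one branch per in-progress feature
      git checkout -b hotfix/crash-on-load master    # urgent fixes branch straight off the stable line
      git tag core-1.4.0                             # tag each release so plugin versions can be matched to core versions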


  • Turn a Kindle into a Weather Display Station

    - by Jason Fitzpatrick
    The e-ink display, network connectivity, and low power consumption of Kindle ebook readers make them a perfect candidate for an infrequently refreshed, high-visibility display, such as a weather display. Read on to see how to hack a Kindle to serve up the local weather. Tinkerer and hardware hacker Matt Petroff hacked his Kindle to accept input from a web server and then, graciously and in the spirit of geeky projects everywhere, shared his source code. He explains the heart of the project: The server side of the system uses shell and Python scripts to convert weather forecast data into an image for the Kindle. The scripts first download and parse forecast data from NOAA via the National Digital Forecast Database XML/SOAP Service. After parsing, the data needs to be converted into an image. This is accomplished by preprocessing a specially crafted SVG file to insert temperatures, forecast symbols, and days of the week. This SVG is then rendered as a PNG using rsvg-convert and converted to a grayscale, no-transparency color space, as required by the Kindle, using pngcrush. Finally, it is copied to a public location on the web server. The Kindle is set to refresh twice a day (you could easily tweak the scripts for a more frequent refresh) and displays the forecast as seen in the photo above, with crisp and easy-to-read text and icons. Hit up the link below for more information and the project's source code.
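
    The pipeline described above could be sketched roughly as follows; the file names, the preprocess helper, and the exact NOAA query string are illustrative, and the project's own scripts are the authoritative version:

      # fetch NOAA NDFD forecast data, then render it for the Kindle
      curl -o forecast.xml "https://graphical.weather.gov/xml/SOAP_server/ndfdXMLclient.php?..."
      python preprocess.py forecast.xml template.svg > weather.svg   # hypothetical helper that fills in temps and symbols
      rsvg-convert -o weather-raw.png weather.svg                    # render the SVG to PNG
      pngcrush -c 0 weather-raw.png weather.png                      # force grayscale, no alpha, as the Kindle expects
      cp weather.png /var/www/html/kindle/weather.png                # publish for the Kindle to fetch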


  • MongoMapper and migrations

    - by Clint Miller
    I'm building a Rails application using MongoDB as the back-end and MongoMapper as the ORM tool. Suppose in version 1, I define the following model: class SomeModel include MongoMapper::Document key :some_key, String end Later in version 2, I realize that I need a new required key on the model. So, in version 2, SomeModel now looks like this: class SomeModel include MongoMapper::Document key :some_key, String key :some_new_key, String, :required => true end How do I migrate all my existing data to include some_new_key? Assume that I know how to set a reasonable default value for all the existing documents. Taking this a step further, suppose that in version 3, I realize that I really don't need some_key at all. So, now the model looks like this: class SomeModel include MongoMapper::Document key :some_new_key, String, :required => true end But all the existing records in my database have values set for some_key, and it's just wasting space at this point. How do I reclaim that space? With ActiveRecord, I would have just created migrations to add the initial values of some_new_key (in the version 1 to version 2 migration) and to delete the values for some_key (in the version 2 to version 3 migration). What's the appropriate way to do this with MongoDB/MongoMapper? It seems to me that some method of tracking which migrations have been run is still necessary. Does such a thing exist? EDITED: I think people are missing the point of my question. There are times when you want to be able to run a script on a database to change or restructure the data in it. I gave two examples above, one where a new required key was added and one where a key can be removed and space can be reclaimed. How do you manage running these scripts? ActiveRecord migrations give you an easy way to run these scripts and to determine which scripts have already been run and which have not. I can obviously write a Mongo script that does any update on the database, but what I'm looking for is a framework like migrations that lets me track which upgrade scripts have already been run.
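
    For what it's worth, the two one-off scripts themselves could look roughly like the Ruby sketch below, using the collection MongoMapper exposes; the default value is a placeholder and the exact update call depends on the driver version in use:

      # version 1 -> 2: backfill the new required key on existing documents
      SomeModel.collection.update(
        { 'some_new_key' => { '$exists' => false } },
        { '$set' => { 'some_new_key' => 'default value' } },
        :multi => true
      )

      # version 2 -> 3: remove the obsolete key from existing documents
      SomeModel.collection.update(
        { 'some_key' => { '$exists' => true } },
        { '$unset' => { 'some_key' => 1 } },
        :multi => true
      )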


  • drupal jQuery 1.4 on specific pages

    - by Mark
    I'm looking for a way to force Drupal to use jQuery 1.4 on specific pages. This is the same as this old question: http://stackoverflow.com/questions/2842792/drupal-jquery-1-4-on-specific-pages It took me a while to try the answer, which I marked correct, but because I'm new to module development overall I couldn't figure it out based on that answer. The code from that answer looked like this: /** * Implementation of hook_theme_registry_alter(). * Based on the jquery_update module. * * Make sure this page preprocess function runs *last*, * so that a theme can't call drupal_get_js(). */ function MYMODULE_theme_registry_alter(&$theme_registry) { if (isset($theme_registry['page'])) { // See if our preprocess function is loaded, if so remove it. if ($key = array_search('MYMODULE_preprocess_page', $theme_registry['page']['preprocess functions'])) { unset($theme_registry['page']['preprocess functions'][$key]); } // Now add it on at the end of the array so that it runs last. $theme_registry['page']['preprocess functions'][] = 'MYMODULE_preprocess_page'; } } /** * Implementation of moduleName_preprocess_hook(). * Based on the jquery_update module functions. * * Strips out JS and CSS for a path. */ function MYMODULE_preprocess_page(&$variables, $arg = 'my_page', $delta=0) { // I needed a one hit wonder. Can be altered to use function arguments // to increase its flexibility. if(arg($delta) == $arg) { $scripts = drupal_add_js(); $css = drupal_add_css(); // Only do this for pages that have JavaScript on them. if (!empty($variables['scripts'])) { $path = drupal_get_path('module', 'admin_menu'); unset($scripts['module'][$path . '/admin_menu.js']); $variables['scripts'] = drupal_get_js('header', $scripts); } // Similar process for CSS but there are 2 CSS related variables. // $variables['css'] and $variables['styles'] are both used. if (!empty($variables['css'])) { $path = drupal_get_path('module', 'admin_menu'); unset($css['all']['module'][$path . '/admin_menu.css']); unset($css['all']['module'][$path . '/admin_menu.color.css']); $variables['styles'] = drupal_get_css($css); } } } I need jquery_update's 1.3.2 to be unset on the node types 'blog' and 'video'. Can someone help me out? Thank you.
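
    A hedged, untested sketch of how the preprocess function above might be keyed off node types instead of a path argument; menu_get_object() is standard Drupal API here, but the exact script key to unset depends on how jquery_update 1.3.x registers its jQuery copy, so treat that line as a placeholder:

      function MYMODULE_preprocess_page(&$variables) {
        $node = menu_get_object();
        if ($node && in_array($node->type, array('blog', 'video'))) {
          $scripts = drupal_add_js();
          $path = drupal_get_path('module', 'jquery_update');
          unset($scripts['core'][$path . '/replace/jquery.js']);   // placeholder key, adjust per jquery_update version
          $variables['scripts'] = drupal_get_js('header', $scripts);
        }
      }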


  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there and I'm familiar with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology, or which has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set. The requirements that I'm trying to meet are: the database schema and look-up table data are versioned; DML scripts for data fixes to larger tables are versioned; a server can be promoted from version N to version N + X, where X may not always be 1; code isn't duplicated within the version control system - for example, if I add a column to a table I don't want to have to make sure that the change is in both a create script and an alter script; and the system needs to support multiple clients who are at various versions of the application (we're trying to get them all up to within 1 or 2 releases, but we're not there yet). Some organizations keep incremental change scripts in their version control, and to get from version N to N + 3 you would have to run scripts for N to N+1, then N+1 to N+2, then N+2 to N+3. Some of these scripts can be repetitive (for example, a column is added but then later it is altered to change the data type). We're trying to avoid that repetitiveness since some of the client DBs can be very large, so these changes might take longer than necessary. Some organizations will simply keep a full database build script at each version level and then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, then in a later version that column name is changed. Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated though. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!
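
    Whatever methodology is chosen, the "promote from version N to N + X" requirement usually comes down to the database carrying its own version stamp. A simplified sketch, with the table and version names invented for the example:

      -- Each upgrade script is guarded by the version it delivers, so it can be skipped or re-run safely
      IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = '2.3.0')
      BEGIN
          ALTER TABLE dbo.Customer ADD Region varchar(50) NULL;   -- schema change for 2.3.0

          -- DML fix tied to the same version (dynamic SQL so the batch compiles before the new column exists)
          EXEC sp_executesql N'UPDATE dbo.Customer SET Region = ''Unknown'' WHERE Region IS NULL';

          INSERT INTO dbo.SchemaVersion (VersionNumber, AppliedOn) VALUES ('2.3.0', GETDATE());
      END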


  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk Cross Referencing features can be found here: http://home.comcast.net/~sdwoodgate/xrefseed.zip http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx Options: current options for managing this data include maintaining XML files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility, or using one of the user interfaces that have been developed to manage this data (the BizTalk Cross Referencing Tool and the XRef XML Creation Tool). However, there are the following issues with the above options. The 'BizTalk Cross Referencing Tool' requires a separate database to manage. The 'XRef XML Creation' tool has no means of persisting the data settings. The 'BizTalk Cross Referencing Tool' generates integers in the common id field; I prefer to use a string (e.g. acme.country.uk), which is more readable (see naming conventions below). Both UI tools continue to use BTSXRefImport.exe, and this utility replaces all xref data. This can be a problem in continuous integration environments that support multiple clients or BizTalk target instances: if you upload the data for one client it would destroy the data for another client, and in TFS, where builds run concurrently, this would break unit tests. Alternative Approach: In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data. Naming Conventions: All data keys use namespace prefixing. The pattern will be <companyName>.<dataType>. The naming convention is to use lower casing for all items. The data must follow this pattern to isolate it from other company cross-reference data. The list below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply for the 'value' cross-referencing tables.)
    xref_AppType.appType - Application Types - e.g. acme.erp, acme.portal, acme.assetmanagement
    xref_AppInstance.appInstance - Application Instances (each will have a corresponding application type) - e.g. acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo
    xref_IDXRef.idXRef - Holds the cross reference data types - e.g. acme.taxcode, acme.country
    xref_IDXRefData.CommonID - Holds each cross reference type value used by the canonical schemas - e.g. acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk
    xref_IDXRefData.AppID - Holds the value for each application instance and each xref type - e.g. GBP, USD
    SQL Scripts: The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the Visual Studio solution.
    Build.cmd - A sqlcmd script to deploy data by running the SQL scripts below (this can be run as part of the MSBuild process).
    acme.purgexref.sql - SQL script to clear acme.* data from the xref tables; as such, this will not impact data for any other company.
    acme.applicationInstances.sql - SQL script to insert application type and application instance data.
    acme.vatcode.sql, acme.country.sql, etc. - There will be a separate SQL script to insert each cross-reference data type and the application-specific values for these types.
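
    To make the approach concrete, here is a much-simplified sketch of what a purge script plus one data script might contain. The real BizTalkMgmtDb xref tables reference xref_IDXRef and xref_AppInstance through key columns that are omitted here, so treat the column lists as illustrative only:

      -- acme.purgexref.sql: remove only acme.* rows, leaving other companies' data untouched
      DELETE FROM xref_IDXRefData WHERE CommonID LIKE 'acme.%';

      -- acme.country.sql: re-insert the country cross-reference values
      INSERT INTO xref_IDXRefData (idXRef, appInstance, CommonID, AppID)
      VALUES ('acme.country', 'acme.dynamics.ax', 'acme.country.uk', 'GB');
      INSERT INTO xref_IDXRefData (idXRef, appInstance, CommonID, AppID)
      VALUES ('acme.country', 'acme.dynamics.ax', 'acme.country.usa', 'US');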


  • ASP.NET Performance tip- Combine multiple script file into one request with script manager

    - by Jalpesh P. Vadgama
    We all need JavaScript for our web applications, and we store our JavaScript code in .js files. If we have more than one .js file, the browser will create a new request for each .js file, which is a little overhead in terms of performance; if you have a very big enterprise application you will have a lot of overhead from this. The ASP.NET ScriptManager provides a feature to combine multiple JavaScript files into one request, but remember that this feature is only available with .NET Framework 3.5 SP1 or higher. Let's take a simple example. I have two JavaScript files, JScript1.js and JScript2.js, each with a separate function. //Jscript1.js function Task1() { alert('task1'); } Here is the other file. //Jscript2.js function Task2() { alert('task2'); } Now I add the script references with the ScriptManager and use these functions in my code like this. <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" > <Scripts> <asp:ScriptReference Path="~/JScript1.js" /> <asp:ScriptReference Path="~/JScript2.js" /> </Scripts> </asp:ScriptManager> <script language="javascript" type="text/javascript"> Task1(); Task2(); </script> </form> Now let's test in Firefox with the Lori plug-in, which shows how many requests are made. Here is the output: you can see there are 5 requests. Now let's do the same thing with the ASP.NET ScriptManager combined script feature, like the following. <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" > <CompositeScript> <Scripts> <asp:ScriptReference Path="~/JScript1.js" /> <asp:ScriptReference Path="~/JScript2.js" /> </Scripts> </CompositeScript> </asp:ScriptManager> <script language="javascript" type="text/javascript"> Task1(); Task2(); </script> </form> Now let's run it and see how many requests there are. As you can see, we now have only 4 requests compared to 5 earlier, so the ScriptManager combined multiple scripts into one request. If you have lots of JavaScript files you can save loading time by combining multiple script files into one request. Hope you liked it. Stay tuned for more! Happy programming.


  • Oracle performance problem

    - by jreid42
    We are using an Oracle 11G machine that is very powerful and has redundant storage, etc. It's a beast from what I have been told. We just got this DB for a tool that had about 20 people using it when I first came on as a co-op; now it's upwards of 150 people, and I am the only one working on it :( We currently have a system in place that distributes Perl scripts across our entire data center, essentially giving us a sort of "grid" computing power. The Perl scripts run a sort of simulation and report the results back to the database. They do selects and inserts. The load is not very high for each script, but it could be happening across 20-50 systems at the same time. We then have multiple data centers and users all hitting the same database with this same approach. Our main problem with this is that our database is getting overloaded with connections and having to drop some. We sometimes have upwards of 500 connections. These are old Perl scripts and they do not handle this well; essentially they fail and the results are lost. I would rather avoid having to rewrite a lot of these, as they are poorly written and are a headache to even look at. The database itself is not overloaded; just the connection overhead is too high. We open a connection, make a quick query and then drop the connection: very short connections, but many of them. The database team has basically said we need to lower the number of connections or they are going to ignore us. Because this is distributed across our farm we can't implement persistent connections. I do this with our web server, but it's on a fixed system; the other ones are Perl scripts that get opened and closed by the distribution tool and thus aren't always running. What would be my best approach to resolving this issue? The scripts themselves can wait for a connection to be open; they do not need to act immediately. Some sort of queuing system? I've been advised to set up a few instances of a tool called "SQL Relay", maybe one in each data center. How reliable is this tool? How good is this approach? Would it work for what we need? We could have one for each data center and relay requests through it to our main database, keeping a pipeline of open persistent connections. Does this make sense? Are there any other suggestions you can make? Any ideas? Any help would be greatly appreciated. Sadly I am just a co-op student working for a very big company and somehow all of this has landed on my shoulders (there is literally nobody to ask for help; it's a hardware company, everybody is a hardware engineer, and the database team is useless and in India), and I am quite lost as to what the best approach would be. I am extremely overworked and this problem is interfering with ongoing progress and basically needs to be resolved as quickly as possible, preferably without rewriting the whole system, purchasing hardware (not gonna happen), or shooting myself in the foot. HELP LOL!
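
    A connection pooler in front of the database (SQL Relay, or Oracle 11g's Database Resident Connection Pooling) is probably where the real relief is, since the scripts are short-lived processes. That said, where a single script run makes several queries, reusing one cached handle instead of connecting and disconnecting per query already trims connection churn. A hedged Perl sketch, with the connection string and credentials as placeholders:

      use strict;
      use DBI;

      sub get_dbh {
          # connect_cached returns the same live handle for identical parameters within this process
          return DBI->connect_cached(
              'dbi:Oracle:PRODDB', 'username', 'password',
              { RaiseError => 1, AutoCommit => 1 },
          );
      }

      my $dbh = get_dbh();                                             # first call opens the connection
      my $row = $dbh->selectrow_arrayref('SELECT sysdate FROM dual');  # later get_dbh() calls reuse it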


  • Windows Azure Mobile Services Updates Keep Coming

    - by Clint Edmonson
    Some exciting new Windows Azure Mobile Services features were delivered to production this week. The highlights include: iPhone and iPad connectivity support via a new iOS SDK; integrated authentication so developers can configure user authentication via Microsoft Account, Facebook, Twitter, and Google; new server-side Mobile Service script modules; access to structured storage, Windows Azure Blob, Table, Queues, and Service Bus; email services through a partnership with SendGrid; SMS and voice services through a partnership with Twilio; and Mobile Services hosting expanded to the US west coast.
    The iOS SDK: I'm excited to share that we've announced the release of an under-development iOS client SDK for Windows Azure Mobile Services. The iOS SDK joins the Windows 8 SDK launched with Windows Azure Mobile Services as well as client SDKs released by Xamarin for MonoTouch and MonoDroid. The native iOS SDK is for developers programming in Objective-C on the iPhone and iPad platforms. The SDK gives developers the same level of access to data storage using dynamic schematization that is available for Windows 8. Also, iOS applications can use the same authentication options available in Mobile Services. While full iOS support is still in development, the libraries are currently available on GitHub. There's a great getting started tutorial to walk you through building a simple iOS "Todo List" app that stores data in Windows Azure. These additional tutorials explore how to use the iOS client libraries to store data and authenticate users: Get Started with data in Mobile Services for iOS; Get Started with authentication in Mobile Services for iOS.
    What's New in Authentication: Available to both iOS and Windows 8 developers, Mobile Services has expanded its authentication options. Developers can now use Microsoft, Facebook, Twitter, and Google authentication. As with Microsoft accounts, developers must register their app through Facebook's, Twitter's, or Google's developer portal in order to authenticate through them. These tutorials walk through how to register your Mobile Service with an identity provider: How to register your app with Microsoft Account; How to register your app with Facebook; How to register your app with Twitter; How to register your app with Google. And these tutorials walk through authenticating against Mobile Services: Get started with authentication in Mobile Services for Windows Store (C#); Get started with authentication in Mobile Services for Windows Store (JavaScript); Get started with authentication in Mobile Services for iOS.
    What's New in Mobile Service Scripts: Some great new functionality is now available in the Mobile Service script layer. These server-side scripts are triggered off any CRUD operation on a Mobile Service's table and can already handle data and query validation, filtering, web requests and more. Now the Azure SDK module is also available to these scripts, giving them access to Blob storage, Service Bus, and Table storage. Check out the new tutorials on the Windows Azure Node.js developer center to learn more about working with Blob, Tables, Queues and Service Bus using the azure module. In addition, SendGrid and Twilio are now available via modules that can be called from the scripts as well. This gives developers the ability to send emails (SendGrid) or SMS text messages (Twilio) whenever a script is fired. Windows Azure customers receive a special offer of 25,000 free emails per month from SendGrid and 1,000 free text messages from Twilio.
    Expanded Data Center Availability: In addition to Mobile Services being available in our US East data center, they can now be spun up in US West. The above features are all now live in production and are available to use immediately. If you don't already have a Windows Azure account, you can sign up for a free trial and start using Mobile Services today. The Windows Azure Mobile Developer Center has been updated with new tutorials that cover these new features in detail. And don't forget: Windows Azure Mobile Services are still free for your first ten applications running on shared compute instances. Stay tuned to my twitter feed for Windows Azure announcements, updates, and links: @clinted
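
    As a flavour of what the new script modules allow, here is a hedged sketch of a table insert script that sends a notification email. The function signature follows the Mobile Services script model; the SendGrid call shape, keys, and addresses are placeholders based on the tutorials of the time rather than verified code:

      function insert(item, user, request) {
          request.execute({
              success: function () {
                  request.respond();  // return the inserted item to the caller first
                  var SendGrid = require('sendgrid').SendGrid;
                  var sendgrid = new SendGrid('SENDGRID_USER', 'SENDGRID_KEY');
                  sendgrid.send({
                      to: 'someone@example.com',
                      from: 'app@example.com',
                      subject: 'New item added',
                      text: 'Inserted: ' + JSON.stringify(item)
                  }, function (success, message) {
                      if (!success) { console.error(message); }
                  });
              }
          });
      }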


  • What is the SharePoint Action Framework and Why do I need it ?

    - by SAF
    For those out there that are a little curious as to whether SAF is any use to your organisation, please read this FAQ. What is SAF? SAF is free to use. SAF is the "SharePoint Action Framework"; it was built by myself and Hugo (plus a few others along the way). SAF is written entirely in C# code, available from: http://saf.codeplex.com. SAF is a way to automate SharePoint configuration changes. An Action is a command/class/task/script written in C# that performs a unit of execution against SharePoint, such as "CreateWeb" or "AddLookupColumn". A SAF Macro is a collection of one or more Actions. A SAF Macro can be run from MSBuild, a Feature, StsAdm or plain old .NET code. Parameters can be passed to a Macro at run-time from a variety of sources such as environment variables, *.config files, MSBuild properties, Feature properties, command line args, or .NET code. SAF emits lots of trace statements at run-time; these can be viewed using "DebugView". One Action can pass parameters to another Action. Parameters can be set using Expression Syntax such as "DateTime.Now". You should consider SAF if you suffer from one of the following symptoms... "Our developers write lots of code to deploy changes at release time - it's always rushed". "I don't want my developers shelling out to PowerShell or Stsadm from a Feature". "We have loads of Console applications now; I have lost track of where they are, or what they do". "We seem to be writing similar scripts against SharePoint in lots of ways; testing is hard". "My scripts often have lots of errors - they are done at the last minute". "When something goes wrong - I have no idea what went wrong or how to solve it". "Our Features get stuck and bomb out half way through - there's no way to roll them back". "We have tons of Features now - I can't keep track". "We deploy Features to run one-off tasks". "We have a library of reusable scripts, but we can only run it in one way; sometimes we want to run it from MSBuild and a Feature". "I want to automate the deployment of changes to our development environment". "I would like to run a housekeeping task on a scheduled basis". So I like the sound of SAF - what are the problems? Realistically, there are a few things that need to be considered: Someone on your team will need to spend a day or two understanding SAF and deciding exactly how you want to use it. I would suggest a Tech Lead, SysAdmin or SP Architect will need to download it, try out the examples, and look through the unit tests. Ask us questions. Although SAF can be downloaded and ready to go in a few minutes, you will still need to address questions such as "Do you want to execute your Macros in MSBuild or from a Feature?" You will need to decide who is going to do your deployments - is it each developer for themselves, or do you require a dedicated Build Manager? As most environments (Dev, QA, Live etc) require different settings (e.g. URLs, database names, accounts etc), you will more than likely want to define these and set up a properties file for each environment. (These can then be injected into SAF at run-time.) There may be no Action to solve your particular problem. If this is the case, suggest it to us - we can try and write it - or write it yourself. It's very easy to write a new Action - we have an approach to easily unit test it, document it and author it. For example, I wrote one to deploy a WSP in 2 hours the other day. Alternatively, SAF can also call Stsadm commands and PowerShell scripts. Anyway, I do hope this helps!
If you still need help, or a quick start, we can also offer consultancy around SAF. If you want to know more give us a call or drop an email to [email protected]


  • How can I beta test web Perl modules under Apache/mod_perl on production web server?

    - by DVK
    We have a setup where most code, before being promoted to full production, is deployed in BETA mode - meaning it runs in the full production environment (using the production database, usually production data, and the production web server). We call that stage BETA testing. One of the main requirements is that BETA code promotion to production must be a simple "cp" command from the beta to the production directory - no code or filename changes. For non-web Perl code, achieving a seamless BETA test is quite doable (see details here): Perl programs live in a standard location under the production root (/usr/code/scripts), with production Perl modules living under the same root (/usr/code/lib/perl); the BETA code has 100% the same code paths, except under the beta root (/usr/code/beta/); and a special module manipulates the @INC of any script, based on whether the script was called from /usr/code/scripts or /usr/code/test/scripts, to include the beta libraries for beta scripts. This setup works fine up until we need to beta test our web Perl code (the setup is EmbPerl and Apache/mod_perl). The hang-up is as follows: if both a production Perl module and a BETA Perl module have the same name (e.g. /usr/code/lib/perl/MyLib1.pm and /usr/code/beta/lib/perl/MyLib1.pm), then mod_perl will only be able to load ONE of these modules into memory, and there's no way we are aware of for a particular web page to affect which version of the module is currently loaded, due to concurrency issues. Leaving aside the obvious non-programming solution (get a bloody BETA web server), which for political/organizational reasons is not feasible, is there any way we can somehow hack around this problem in either Perl or mod_perl? I played around with various approaches to unloading Perl modules that %INC has listed, but the problem remains that another user might load a beta page at just the right (or rather wrong) moment and have the beta module loaded, which will then be used for my production page.
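
    For reference, the @INC-switching module described above might look roughly like the sketch below for the command-line scripts; the paths follow the question and the beta-detection rule is illustrative. It does not solve the mod_perl half of the problem, where per-location module versioning (something like Apache::PerlVINC, or separate interpreter pools) may be worth a look, since a single interpreter can only hold one copy of MyLib1.pm:

      package EnvLib;   # hypothetical module name
      use strict;
      use FindBin;

      BEGIN {
          unshift @INC, '/usr/code/lib/perl';            # production libraries
          if ($FindBin::Bin =~ m{^/usr/code/beta/}) {
              unshift @INC, '/usr/code/beta/lib/perl';   # beta copies win for beta scripts
          }
      }

      1;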


  • Validation Summary with JQuery in MVC 2

    - by Nigel Sampson
    I'm trying to get client validation working in my ASP.NET MVC 2 web application (Visual Studio 2010). The client-side validation IS working; however, the validation summary is not. I'm including the following scripts: <script type="text/javascript" src="../../content/scripts/jquery-1.4.1.js"></script> <script type="text/javascript" src="../../content/scripts/jquery.validate.js"></script> <script type="text/javascript" src="../../content/scripts/MicrosoftMvcJQueryValidation.js"></script> I have this before the form is started: <% Html.EnableClientValidation(); %> and inside the form is: <%: Html.ValidationSummary("There are some errors to fix.", new { @class = "warning_box" })%> <p> <%: Html.LabelFor(m => m.Name) %><br /> <%: Html.TextBoxFor(m => m.Name) %> <%: Html.ValidationMessageFor(m => m.Name, "*") %> </p> I have the latest version of MicrosoftMvcJQueryValidation.js from the MvcFutures download, but it doesn't look like it supports the validation summary. I've tried correcting this by setting extra options such as errorContainer and errorLabelContainer, but it looks like there are some more underlying issues with it. Is there an updated / better version of this file around?


  • ASP.NET MVC2: Client-side validation not working with Start.js

    - by Shaggy13spe
    Ok, this is strange. I would hope it's something I'm doing wrong and not that MS has two technologies that simply don't work together. (UPDATE: See bottom of post for script tag order in the HEAD section.) I'm trying to use the dataView template and client-side validation. If I include a reference to: <script src="http://ajax.microsoft.com/ajax/beta/0911/Start.js" type="text/javascript"></script> by itself, the dataView template works fine. But if I put in the following references: <script src="http://ajax.microsoft.com/ajax/jquery.validate/1.7/jquery.validate.min.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script> then I get the following errors: "Type._registerScript is not a function" (source file: http://ajax.microsoft.com/ajax/beta/0911/MicrosoftAjaxTemplates.js, line 1) and "Sys.get("$listings") is null" (source file: http://localhost:12370/Listings, line 76). Here's the code calling the dataView: $(document).ready(function () { LoadMap(); Sys.require([Sys.components.dataView, Sys.scripts.jQuery], function() { $("#listings").dataView(); Sys.get("$listings").set_data(listings.Data); updateMap(listings.Data); }); }); I would really appreciate any help with this one. Thanks! UPDATE: I've tried moving around the order of the last 4 script tags, but to no avail.


  • JavaScript keeps returning ambigious error (in ASP.NET MVC 2.0)

    - by Erx_VB.NExT.Coder
    This is my function (with other lines I've tried and abandoned): function DoClicked(eNumber) { //obj.style = 'bgcolor: maroon'; var eid = 'cat' + eNumber; //$get(obj).style.backgroundColor = 'maroon'; //var nObj = $get(obj); var nObj = document.getElementById(eid) //alert(nObj.getAttribute("style")); nObj.style.backgroundColor = 'Maroon'; alert(nObj.style.backgroundColor); //nObj.setAttribute("style", "backgroundcolor: Maroon"); }; This error keeps getting returned even after the last line in the function runs: Microsoft JScript runtime error: Sys.ArgumentUndefinedException: Value cannot be undefined. Parameter name: method This function is called with an "OnSuccess" set in my Ajax.ActionLink call (ASP.NET MVC). Does anyone have any ideas on this? I have these referenced; even when I swap the 'debug' versions for the normal versions, I still get an error, but the error just has much less information and says 'b' is undefined (probably an MS JS library internal variable): <script src="../../Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcValidation.debug.js" type="text/javascript"></script> <script src="../../Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script> <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script> Also, this is how I am calling the ActionLink method: Ajax.ActionLink(item.CategoryName, "SubCategoryList", "Home", New With {.CategoryID = item.CategoryID}, New AjaxOptions With {.UpdateTargetId = "SubCat", .HttpMethod = "Post", .OnSuccess = "DoClicked(" & item.CategoryID.ToString & ")"}, New With {.id = "cat" & item.CategoryID.ToString})


  • Named pipe blocking with user nobody

    - by dnagirl
    I have 2 short scripts. The first, an awk script, processes a large file and prints to a named pipe 'myfifo.dat'. The second, a Perl script, runs a LOAD DATA LOCAL INFILE 'myfifo.dat'... command. Both of these scripts work when run locally like so: lee.awk big.file & lee.pl However, when I call these scripts from a PHP webpage, the named pipe blocks: $awk="/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} &"; $sql="/path/to/lee.pl"; if(!exec($awk,$return,$err)) throw new ZException(print_r($err,true)); //blocks here if(!exec($sql,$return,$err)) throw new ZException(print_r($err,true)); If I modify the awk and Perl scripts so that they write and read to a normal file, everything works fine from PHP. The permissions on the fifo and the normal file are 666 (for testing purposes). These operations run much more quickly through a named pipe, so I'd prefer to use one. Any ideas how to unblock it? ps. In case you're wondering why I'm going to all this aggravation, see this SO question.
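
    One hedged guess at the deadlock, for what it's worth: PHP's exec() does not return until the launched program exits unless its output is redirected, and awk in turn blocks on the fifo until a reader opens it, so the Perl reader never gets started. Backgrounding the writer with its output redirected lets the two overlap; a sketch against the code above:

      $awk = "/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} > /dev/null 2>&1 &";
      exec($awk);                               // returns immediately; awk waits on the fifo for a reader
      if(!exec($sql, $return, $err))            // lee.pl opens the fifo; LOAD DATA reads while awk writes
          throw new ZException(print_r($err, true));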


  • Error Ant Build/deploy to websphere 7.0

    - by adisembiring
    Hi, I'm trying to build and deploy a WAR to WebSphere Process Server 7.0, running in a Windows environment. I use http://illegalargumentexception.blogspot.com/2008/08/ant-automated-deployment-to-websphere.html as my reference, and http://illegalargumentexception.googlecode.com/svn/trunk/code/java/WebSphereAntFiles/ as my sample code to deploy. This is my build.properties: #build properties mywebappear=D:/data/code/WebSphereAntFiles/scripts/test/mywebappEAR.ear #WAS6 install directory was_home=C:/IBM/WID7_WTE/runtimes/bi_v7 #server name (see cell/node/server; e.g. "server1") was_server=server1 #user + password; for use when security is enabled was_user=admin was_password=admin #stops scripts on problem was_failonerror=true #virtual host was_virtualhost=default_host #Absolute path to EAR file #was_ear=fooEAR.ear #Name of the enterprise application #was_appname=fooEAR This is my console output while trying to build with ws_ant.bat: [wsDefaultBindings] mywebapp.war [wsDefaultBindings] <virtual-host> --> default_host [wsDefaultBindings] [wsDefaultBindings] ------------------------ [wsDefaultBindings] Saving EAR File to directory [wsDefaultBindings] Saved EAR File to directory Successfully test_wsStartServer: WAS_wsStartServer: depCheck: depCheck: [startServer] ADMU0116I: Tool information is being logged in file [startServer] C:\IBM\WID7_WTE\runtimes\bi_v7\profiles\qwps\logs\server1\startServer.log [startServer] ADMU0128I: Starting tool with the qwps profile [startServer] ADMU3100I: Reading configuration for server: server1 [startServer] ADMU3028I: Conflict detected on port 8880. Likely causes: a) An instance of [startServer] the server server1 is already running b) some other process is [startServer] using port 8880 [startServer] ADMU3027E: An instance of the server may already be running: server1 [startServer] ADMU0111E: Program exiting with error: [startServer] com.ibm.websphere.management.exception.AdminException: ADMU3027E: An [startServer] instance of the server may already be running: server1 [startServer] ADMU1211I: To obtain a full trace of the failure, use the -trace option. [startServer] ADMU0211I: Error details may be seen in the file: [startServer] C:/IBM/WID7_WTE/runtimes/bi_v7/profiles/qwps\logs\server1\startServer.log BUILD FAILED D:\data\code\WebSphereAntFiles\scripts\test\build.xml:68: The following error occurred while executing this line: D:\data\code\WebSphereAntFiles\scripts\was\wsStartServer.xml:49: Java returned: -1


  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds: cx = sqlite.connect("database.sql", timeout=30.0) and I think I can see some evidence of the timeouts in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally in the middle of my curses-formatted output screen, but no delay ever gets remotely near the 30-second timeout, yet one or the other still keeps crashing again and again because of this. I'm running RHEL 5.4 on a 64-bit, 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, and am not sure if this might be relevant. Packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of RedHat's official provisions is not a great option for me: possible, but not desirable due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it in both, and am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever happen. I have noticed that this gets significantly worse over time as the database grows larger. It was recently at 13 MB with 3 equally sized tables, which was about 1 day's worth of data. Creating a new file has significantly improved this, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks Chris
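
    A hedged sketch of the usual mitigations, matching the import style above: commit as soon as each write finishes so the lock is released, and retry on the lock error rather than crashing. The exception class name is assumed from the DB-API, and the reading script should likewise commit or close its cursors after reads so it is not holding a transaction open:

      import time
      import sqlite

      def execute_with_retry(cx, sql, params=None, attempts=5):
          for attempt in range(attempts):
              try:
                  cur = cx.cursor()
                  if params:
                      cur.execute(sql, params)
                  else:
                      cur.execute(sql)
                  cx.commit()              # release the write lock promptly
                  return cur
              except sqlite.OperationalError, e:
                  if 'locked' in str(e) and attempt < attempts - 1:
                      time.sleep(0.5)      # back off, then try again
                  else:
                      raise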


  • Using themed css files requires a header control on the page. (e.g. <head runat="server" />).

    - by wide
    It works locally; when I deploy it to the server, I get this error: Using themed css files requires a header control on the page. (e.g. <head runat="server" />). Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />). Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [InvalidOperationException: Using themed css files requires a header control on the page. (e.g. <head runat="server" />).] System.Web.UI.PageTheme.SetStyleSheet() +2458406 System.Web.UI.Page.OnInit(EventArgs e) +8699420 System.Web.UI.Control.InitRecursive(Control namingContainer) +333 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +378 -------------------------------------album1.aspx------------------ <%@ Page Language="C#" MasterPageFile="~/MasterPage.master" AutoEventWireup="true" CodeFile="album1.aspx.cs" Inherits="album1" Title="Özkan Köylü | Albüm" %> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> <script src="scripts/Silverlight.js" type="text/javascript"></script> <script src="scripts/Default_html.js" type="text/javascript"></script> <script src="scripts/Page.xaml.js" type="text/javascript"></script> <script src="scripts/gallery.js" type="text/javascript"></script> <div align="center" id="SilverlightControlHost"> <script type="text/javascript"> createSilverlight(); </script> </div> </asp:Content>


  • How to correctly load dependent JavaScript files

    - by Vaibhav Garg
    I am trying to extend a website page that displays Google Maps with the LabeledMarker. The Google Maps API defines a class called GMarker, which is extended by LabeledMarker. The problem is, I can't seem to load the LabeledMarker script properly, i.e. after the Google API has loaded, and I get the 'GMarker not defined' error. What is the correct way to specify the scripts in such cases? I am using ASP.NET's ClientScript.RegisterClientScriptInclude(), first for the Google API URL and then, immediately after, for the LabeledMarker script file. The initial Google API loader writes further script links that load the actual GMarker class. Shouldn't all those scripts be executed before the next script block (the LabeledMarker script) is processed? I have checked the generated HTML and the script blocks are emitted in the right order. <script src="google api url" type="text/javascript"></script> ... (the above scripts use document.write() etc. to append further script blocks/sources) ... <script src="Scripts/LabeledMarker.js" type="text/javascript"></script> Once again, the LabeledMarker.js seems to get executed before the Google API finishes loading.
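
    Since the Google loader writes its own script tags asynchronously, one client-side workaround (a hedged sketch, not specific to ASP.NET's script registration) is to delay pulling in the dependent file until GMarker actually exists:

      <script type="text/javascript">
          // poll for the Maps API, then append LabeledMarker.js once GMarker is defined
          (function waitForMapsApi() {
              if (typeof GMarker !== 'undefined') {
                  var s = document.createElement('script');
                  s.type = 'text/javascript';
                  s.src = 'Scripts/LabeledMarker.js';
                  document.getElementsByTagName('head')[0].appendChild(s);
              } else {
                  setTimeout(waitForMapsApi, 100);
              }
          })();
      </script>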


  • ASP.NET MVC3 - Bug using Javascript

    - by ebb
    Hey there, I'm trying to use Ajax.BeginForm() to POST and get a JSON result back from my controller (I'm using MVC3). When the JSON result is returned it should be passed to a JavaScript function, which extracts the object using "var myObject = content.get_response().get_object();". However, it just throws "Microsoft JScript runtime error: Object doesn't support this property or method" when the Ajax POST is invoked. My code: Controller: [HttpPost] public ActionResult Index(string message) { return Json(new { Success = true, Message = message }); } View: <!DOCTYPE html> <html> <head> <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/MicrosoftAjax.js")" type="text/javascript"></script> <script src="@Url.Content("~/Scripts/MicrosoftMvcAjax.js")" type="text/javascript"></script> <script type="text/javascript"> function JsonAdd_OnComplete(mycontext) { var myObject = mycontext.get_response().get_object(); alert(mycontext.Message); } </script> </head> <body> <div> @using(Ajax.BeginForm("Index", "Home", new AjaxOptions() { HttpMethod = "POST", OnComplete = "JsonAdd_OnComplete" })) { @Html.TextBox("message") <input type="submit" value="SUBMIT" /> } </div> </body> </html> The strange thing is that exactly the same code works in MVC2. Is this a bug, or have I forgotten something? Thanks in advance.


  • ScriptManager duplicates javascript

    - by Andreas
    Hi! I have a user control that can be used in, for example, a GridView ItemTemplate; this means that the control might or might not be on the page at page load. In the case where the control is inside an ItemTemplate I populate the GridView via asynchronous postbacks (via UpdatePanels). The control itself registers script blocks since it depends on JavaScript. First I used Page.ClientScript.RegisterClientScriptBlock, but this doesn't work on asynchronous postbacks (UpdatePanels), so I then tried the same using ScriptManager, which allows me to register scripts on the page after async postbacks. Great! ScriptManager.RegisterClientScriptBlock However, ScriptManager (as far as I know) does not have the functionality to check whether a script is already on the page, so every postback generates duplicates of the script blocks, which is of course unwanted behaviour. I did a run at Google and found that I can call the Dispose() method of the PageRequestManager, which does work, since it clears the scripts and then adds them again (this also solves my issue with removing unused script blocks from removed controls). Sys.WebForms.PageRequestManager.getInstance().Dispose() However, of course there is a downside, since I'm posting here :). The Dispose() method disposes the instance on the master page as well, which means scripts running there will stop functioning after an async postback (UpdateProgress for example). So, is there a way to check if a script already exists on the page, using ScriptManager or any other tool, that will prevent me from inserting duplicate scripts? Also, is there a way to remove certain script blocks (when I am removing an item in an ItemTemplate, for example)? Big thanks in advance.

