Search Results

Search found 9083 results on 364 pages for 'startup scripts'.


  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use source control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote, was... none: "We don't use source control for database scripts". In second place, with almost 28% of the vote, was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use source control".

    At first glance, it's a surprising result. You wonder how database developers working in a team can find out what changed when a system that worked before is now broken; work out what happened to changes that seem to have vanished; roll back a mistake quickly so that the rest of the team has a functioning build; or find out instantly whether a suspect change has been deployed to production. Unfortunately, the survey didn't ask about the scale of the database development, so we can't correlate the two answers. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more of its own metadata than a traditional compiled application does. However, what is meat for a small development is poison for a team-based one. There, we need a form of source control that can reconcile simultaneous changes, store the history of changes, derive versions and builds, and cope with forks and merges.

    The problem comes when one borrows a solution that was designed for conventional programming. A database is not a "file", but a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit. Subversion, with its support for merges and forks and its tolerance of different work practices, can be made to work well if used carefully. It has a standards-based architecture that allows it to be used on all platforms, such as Windows, Mac, and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point, and if you've broken anything (e.g. you renamed a column and broke all the objects that reference it), you'll find out about it right away, and you'll be forced to fix it.

    Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question becomes more subtle: would developers be less reluctant to use a fully-featured source (revision) control system for team database development if they had a turn-key, reliable system that fitted in with their existing work practices? I'd love to hear what you think. Cheers, Tony.

    Read the article

  • SQL Constraints - CHECK and NOCHECK

    - by David Turner
    One performance issue I faced at a recent project was with the way that our constraints were being managed. We were using SubSonic as our ORM, and it has a useful tool for generating your ORM code called SubStage - once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this. SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate-scripts feature to migrate the database onto a test database instance: it turns out that the DDL scripts it generates include the WITH NOCHECK option, so when we executed them on the test instance and performed some testing, we found that performance wasn't as expected.

    A constraint can be disabled, enabled but not trusted, or enabled and trusted. When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk-load scenarios where performance is important. So what does it mean to say that a constraint is trusted or not trusted? This refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid. If it trusts the constraint, it doesn't re-verify it when building a plan for a query, so the query can be executed much faster.

    Here is an example based on this article on TechNet. We create two tables with a foreign key constraint between them, add a single row to each, and then query the tables:

        DROP TABLE t2
        DROP TABLE t1
        GO

        CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
        CREATE TABLE t2(col1 int NOT NULL)

        ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
        REFERENCES t1(col1)

        INSERT INTO t1 VALUES(1)
        INSERT INTO t2 VALUES(1)
        GO

        SELECT COUNT(*) FROM t2
        WHERE EXISTS
            (SELECT *
             FROM t1
             WHERE t1.col1 = t2.col1)

    This all works fine, and in this scenario the constraint is enabled and trusted. We can verify this by querying the 'is_disabled' and 'is_not_trusted' properties:

        select name, is_disabled, is_not_trusted from sys.foreign_keys

    The result shows both flags set to 0. We can disable the constraint using this SQL:

        alter table t2 NOCHECK CONSTRAINT fk_t2_t1

    When we query the constraint again, we see that it is now disabled and not trusted. In this state the constraint won't be enforced, and we could insert data into table t2 that doesn't match the data in t1 - but we don't want to do that, so we re-enable the constraint using this SQL:

        alter table t2 CHECK CONSTRAINT fk_t2_t1

    But when we query the constraint again, we see that although it is enabled, it is still not trusted. This means the optimizer cannot rely on the constraint when building query plans, so plans over these tables carry extra work that a trusted constraint would let the optimizer eliminate. That is definitely not what we want, so we need to make the constraint trusted by the optimizer again.
    First we should check that our constraints haven't been violated, which we can do by running DBCC:

        DBCC CHECKCONSTRAINTS (t2)

    Hopefully DBCC completes without finding any violations of the constraint. Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL:

        alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1

    At first glance this looks like it must be a typo to have the keyword CHECK appear twice in succession, but it is the correct syntax, and when we query the constraint's properties we find that it is now trusted again. To fix our specific problem, we created a script that checked all constraints on our tables, using the following syntax:

        ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
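
    As a closing aside, here is a minimal sketch of how to find every constraint that needs this treatment, assuming SQL Server 2005 or later, where the catalog views below expose the two flags discussed in this post:

        -- list every foreign key and check constraint that is
        -- disabled, or enabled but not trusted by the optimizer
        SELECT name, type_desc, is_disabled, is_not_trusted
        FROM sys.foreign_keys
        WHERE is_disabled = 1 OR is_not_trusted = 1
        UNION ALL
        SELECT name, type_desc, is_disabled, is_not_trusted
        FROM sys.check_constraints
        WHERE is_disabled = 1 OR is_not_trusted = 1;

    Each row returned is a candidate for the WITH CHECK CHECK CONSTRAINT repair shown above.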

    Read the article

  • Administer, manage, monitor, and fine-tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key features of the book: if you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about your tasks and helps you apply what you learn in your everyday work, right from the first chapter. The book walks through promoting code across environments, performance-tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers a unique insight into Oracle SOA Suite administration.

    Detailed description: the book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments and their lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance-tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, and configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance-tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up Email, LDAP and custom XPath. An administrator is always entrusted with troubleshooting and root-causing problems in the infrastructure, and this book walks through troubleshooting approaches: how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced content of this book explains the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all groundwork needed to ready the environment. The last few chapters help you understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching and high-availability installations. Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over the chapters of the book and can also be downloaded from here. The book is available in different formats at the following websites: paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.
SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book,SOA Suite Adminsitration,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Why don't we just fix JavaScript?

    - by Jan Meyer
    JavaScript suffers from a few fatal flaws, well pointed out by Douglas Crockford. We talk a lot about them. But the point here is: why don't we fix them? CoffeeScript of course does that, and a lot more. The question here is a different one: if we provide a web service that can convert one version of JavaScript to the next, and so on, we can keep the language up to date. Such a conversion allows old code to run, albeit with an ever-increasing startup delay, as newer browsers convert old code to the new syntax. To avoid that delay, a site only needs to take the output of the code transform and paste it in! The effort has immediate benefits for those businesses interested in the results. The rest can sleep tight: their code will continue to run. If we provide backward code transformation as well, then older browsers can also run ANY new code!

    Migration scripts should be created by those who make changes to a language. Today they don't, which is in itself a fundamental omission! It should be an obvious part of their job to provide them, as their job isn't really done without them. The onus of making it work should be on them. With this system, any site will be able to run in any browser, but new code will run best on the newest browsers. This way we reap the benefit of an up-to-date and productive development environment, where today we suffer, supposedly because of yesterday. This is a misconception. We are all trapped in committee thinking, and we drag along things that only worsen our performance over time! We cause an ever-increasing complexity that is hard to overstate. JavaScript is easily fixed. The fact is we don't fix it.

    As an example, I have seen Patrick Michaud tackle the migration problem in PmWiki. It included forward migration scripts. Whenever syntax changes were made, a migration script was added to transform pages to the new syntax. As far as I know, ALL migrations have worked flawlessly. In other words, we don't tackle the migration problem, we just drag it along. We are incompetent! And why is that? Because technically incompetent people feel they must decide for us. Because they are incompetent, fear rules them. They are obnoxiously conservative, and we suffer the consequences of bad leadership. But the competent don't need to play by the same rules. They can (and must) change them. They are the path forward. It is about time to leave the past behind and pursue the leanest, meanest, no, eternal functionality. That would in and of itself revolutionize programming. So, why don't we stop whining and fix programming? Begin with JavaScript and change the world. Even if the browser doesn't hook into this system, coders could. So language updaters should take it upon themselves to provide migration scripts. Once they exist, browsers may take advantage of them.

    Read the article

  • Inside Red Gate - Exercising Externally

    - by simonc
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done on external users.

    External testing. Some external testing can be done by standard usability tests and surveys; however, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds:

    Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build, we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses.

    There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this then makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this limits the amount of useful data we'll be getting back, and possibly skews the data, there's not much we can do about it; we don't collect feature usage data without the user's consent. Looks like we'll simply have to live with this.

    What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book; how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot and scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad. Again, there's not really any good solution to this.

    Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it.) The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution.

    So, some tricky issues there, not all of them with nice easy answers. Turns out the practicalities of running Lean Startup experiments are more complicated than they first seem! Cross posted from Simple Talk.

    Read the article

  • Using AMDU to extract database files from a diskgroup that cannot be mounted

    - by Liu Maclean
    AMDU is Oracle's ASM Metadata Dump Utility. It serves three main purposes:

    1. Dumping the metadata stored on ASM disks
    2. Extracting files from an ASM diskgroup to the OS filesystem, whether or not the diskgroup is mounted
    3. Printing block metadata, as C structs or as hex dumps

    Here we use AMDU to extract files from an ASM diskgroup. Normally, files inside ASM can only be reached through a mounted diskgroup, via the database or the ASM utilities. AMDU's great advantage is that it can pull files out of a diskgroup even when the diskgroup cannot be mounted, without needing the RDBMS or ASM instance to read them. AMDU ships with 11g, but it can also be used against 10g ASM disks. Consider this scenario: the database's SPFILE, CONTROLFILE and DATAFILEs are all stored in an ASM diskgroup, and an ASM ORA-600 error prevents the diskgroup from being mounted. We can use AMDU to extract them from the ASM disks directly.

    Step 1: extract the SPFILE, CONTROLFILE and DATAFILEs. If you still have the SPFILE (or a PFILE backup of it), find the control_files parameter:

        SQL> show parameter control_files

        NAME            TYPE    VALUE
        --------------- ------- ------------------------------
        control_files   string  +DATA/prodb/controlfile/current.260.794687955, +FRA/prodb/controlfile/current.256.794687955

    In +DATA/prodb/controlfile/current.260.794687955, the 260 is the controlfile's file number within the +DATA diskgroup. We also need the ASM disk discovery path, which can be found in the asm_diskstring parameter of the ASM instance's SPFILE.

        [oracle@mlab2 oracle.SupportTools]$ unzip amdu_X86-64.zip
        Archive: amdu_X86-64.zip
        inflating: libskgxp11.so
        inflating: amdu
        inflating: libnnz11.so
        inflating: libclntsh.so.11.1

        [oracle@mlab2 oracle.SupportTools]$ export LD_LIBRARY_PATH=./
        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.260
        amdu_2009_10_10_20_19_17/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_19_17/
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls
        DATA_260.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_19_17]$ ls -l
        total 9548
        -rw-r--r-- 1 oracle oinstall 9748480 Oct 10 20:19 DATA_260.f
        -rw-r--r-- 1 oracle oinstall    9441 Oct 10 20:19 report.txt

    The extracted DATA_260.f is a usable controlfile. Point the instance at it and bring the database to mount state:

        SQL> alter system set control_files='/opt/oracle.SupportTools/amdu_2009_10_10_20_19_17/DATA_260.f' scope=spfile;

        System altered.

        SQL> startup force mount;
        ORACLE instance started.

        Total System Global Area 1870647296 bytes
        Fixed Size                  2229424 bytes
        Variable Size             452987728 bytes
        Database Buffers         1409286144 bytes
        Redo Buffers                6144000 bytes
        Database mounted.

        SQL> select name from v$datafile;

        NAME
        --------------------------------------------------------------------------------
        +DATA/prodb/datafile/system.256.794687873
        +DATA/prodb/datafile/sysaux.257.794687875
        +DATA/prodb/datafile/undotbs1.258.794687875
        +DATA/prodb/datafile/users.259.794687875
        +DATA/prodb/datafile/example.265.794687995
        +DATA/prodb/datafile/mactbs.267.794688457

        6 rows selected.

    With the database mounted, v$datafile lists each datafile's name, including its file number within the diskgroup. Repeat ./amdu -diskstring '/dev/asm*' -extract for each datafile to pull it out of ASM:
        [oracle@mlab2 oracle.SupportTools]$ ./amdu -diskstring '/dev/asm*' -extract data.256
        amdu_2009_10_10_20_22_21/
        AMDU-00204: Disk N0006 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0006: '/dev/asm-disk10'
        AMDU-00204: Disk N0003 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0003: '/dev/asm-disk5'
        AMDU-00204: Disk N0002 is in currently mounted diskgroup DATA
        AMDU-00201: Disk N0002: '/dev/asm-disk6'

        [oracle@mlab2 oracle.SupportTools]$ cd amdu_2009_10_10_20_22_21/
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ ls
        DATA_256.f report.txt
        [oracle@mlab2 amdu_2009_10_10_20_22_21]$ dbv file=DATA_256.f

        DBVERIFY: Release 11.2.0.3.0 - Production on Sat Oct 10 20:23:12 2009

        Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

        DBVERIFY - Verification starting : FILE = /opt/oracle.SupportTools/amdu_2009_10_10_20_22_21/DATA_256.f

        DBVERIFY - Verification complete

        Total Pages Examined         : 90880
        Total Pages Processed (Data) : 59817
        Total Pages Failing (Data)   : 0
        Total Pages Processed (Index): 12609
        Total Pages Failing (Index)  : 0
        Total Pages Processed (Other): 3637
        Total Pages Processed (Seg)  : 1
        Total Pages Failing (Seg)    : 0
        Total Pages Empty            : 14817
        Total Pages Marked Corrupt   : 0
        Total Pages Influx           : 0
        Total Pages Encrypted        : 0
        Highest block SCN            : 1125305 (0.1125305)

    dbv verifies the extracted datafile cleanly: no pages are marked corrupt, so the file pulled out of the diskgroup is intact.

    Read the article

  • ASP.NET MVC tries to load older version of Owin assembly

    - by d_mcg
    As a bit of context, I'm developing an ASP.NET MVC 5 application that uses OAuth-based authentication via Microsoft's OWIN implementation, for Facebook and Google only at this stage. Currently (as of v3.0.0, git commit 4932c2f), FacebookAuthenticationOptions and GoogleOAuth2AuthenticationOptions don't provide any property to force Facebook or Google, respectively, to reauthenticate users (via appending the appropriate query string parameters) when signing in. Initially, I set out to override the following classes:

    - FacebookAuthenticationOptions
    - GoogleOAuth2AuthenticationOptions
    - FacebookAuthenticationHandler (specifically AuthenticateCoreAsync())
    - GoogleOAuth2AuthenticationHandler (specifically AuthenticateCoreAsync())

    yet discovered that the ~AuthenticationHandler classes are marked as internal. So I pulled a copy of the source for the Katana project (http://katanaproject.codeplex.com/) and modified it accordingly. After compiling, I found that several dependencies needed updating in order to use the updated assemblies (Microsoft.Owin.Security.Facebook and Microsoft.Owin.Security.Google) in the MVC project:

    - Microsoft.Owin
    - Microsoft.Owin.Security
    - Microsoft.Owin.Security.Cookies
    - Microsoft.Owin.Security.OAuth
    - Microsoft.Owin.Host.SystemWeb

    This was done by replacing the existing project references with the 3.0.0 versions and updating those in web.config. Good news: the project compiles successfully. In debugging, I received an exception on startup:

        An exception of type 'System.IO.FileLoadException' occurred in [MVC web assembly].dll but was not handled in user code

        Additional information: Could not load file or assembly 'Microsoft.Owin.Security, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    The underlying exception indicated that Microsoft.AspNet.Identity.Owin was trying to load v2.1.0 of Microsoft.Owin.Security when calling app.UseExternalSignInCookie() from Startup.ConfigureAuth(IAppBuilder app) in Startup.Auth.cs. Unfortunately that assembly (and its other dependency, Microsoft.AspNet.Identity.Owin) aren't part of the Project Katana solution, and I can't find any accessible repository for these assemblies online. Are the Microsoft.AspNet.Identity assemblies open source, like the Katana project? Is there a way to fool those assemblies into using the referenced v3.0.0 assemblies instead of v2.1.0? The /bin folder contains the 3.0.0 versions of the Owin assemblies. I've upgraded the NuGet packages for Microsoft.AspNet.Identity.Owin, and this is still an issue. Any ideas on how to resolve this?
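
    A hedged sketch of one standard mitigation, in case it helps a future reader: a binding redirect in web.config forces any reference to an older Microsoft.Owin.Security onto the 3.0.0 build actually deployed in /bin. Note that this only papers over version numbers, not a genuinely incompatible API surface, and if the rebuilt Katana assemblies are no longer signed with the Microsoft key, the publicKeyToken below would need to match whatever key was used:

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <!-- redirect requests for older builds to the 3.0.0 assembly in /bin -->
            <dependentAssembly>
              <assemblyIdentity name="Microsoft.Owin.Security"
                                publicKeyToken="31bf3856ad364e35" culture="neutral" />
              <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="3.0.0.0" />
            </dependentAssembly>
          </assemblyBinding>
        </runtime>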

    Read the article

  • ASP.NET PowerShell Impersonation

    - by Ben
    I have developed an ASP.NET MVC web application to execute PowerShell scripts. I am using the VS web server and can execute scripts fine. However, a requirement is that users are able to execute scripts against AD to perform actions that their own user accounts are not allowed to do. Therefore I am using impersonation to switch the identity before creating the PowerShell runspace:

        // config is the RunspaceConfiguration built earlier
        Runspace runspace = RunspaceFactory.CreateRunspace(config);
        var currentuser = WindowsIdentity.GetCurrent().Name;
        if (runspace.RunspaceStateInfo.State == RunspaceState.BeforeOpen)
        {
            runspace.Open();
        }

    I have tested using a domain admin account, and I get the following exception when calling runspace.Open():

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Requested registry access is not allowed.

    The web application is running in full trust, and I have explicitly added the account I am using for impersonation to the local administrators group of the machine (even though the domain admins group was already there). I'm using the advapi32.dll LogonUser call to perform the impersonation, in a similar way to this post (http://blogs.msdn.com/webdav_101/archive/2008/09/25/howto-calling-exchange-powershell-from-an-impersonated-thead.aspx). Any help appreciated, as this is a bit of a show-stopper at the moment. Thanks, Ben
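
    For readers hitting the same wall, a minimal sketch of the LogonUser-based impersonation pattern referenced above; the account, domain, and password are hypothetical placeholders, and error handling is abbreviated:

        // requires: using System; using System.ComponentModel;
        //           using System.Runtime.InteropServices; using System.Security.Principal;
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(string user, string domain, string password,
                                     int logonType, int logonProvider, out IntPtr token);

        const int LOGON32_LOGON_INTERACTIVE = 2;
        const int LOGON32_PROVIDER_DEFAULT  = 0;

        void OpenRunspaceAs(Runspace runspace)
        {
            IntPtr token;
            // "svcPowerShell"/"MYDOMAIN" are placeholder credentials
            if (!LogonUser("svcPowerShell", "MYDOMAIN", "password",
                           LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
                throw new Win32Exception(Marshal.GetLastWin32Error());

            // impersonate only for the duration of the open call
            using (WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate())
            {
                runspace.Open();
            }
        }

    Worth double-checking in this situation is whether the impersonated identity itself, not just the application pool identity, has the registry rights PowerShell needs on the machine.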

    Read the article

  • JQuery Thickbox - Can't get it to display

    - by Ali
    Hi all, I am using jQuery for the first time today and I can't make it work (it simply doesn't show up). I want to display inline content using Thickbox (eventually I will be displaying a PDF in an iframe). I have included all the JavaScript and CSS files and referenced them as in the code below.

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="JQueryLearning._Default" %>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title>Untitled Page</title>
            <script src="Scripts/thickbox.js" type="text/javascript"></script>
            <script src="Scripts/thickbox-compressed.js" type="text/javascript"></script>
            <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
            <link href="App_Themes/Theme/Css/thickbox.css" rel="stylesheet" type="text/css" />
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <a href="#TB_inline?height=50&width=300&inlineId=hiddenModalContent" title="Simple Demo" class="thickbox">Show hidden content.</a>
                <div id="hiddenModalContent" style="display: none;">
                    <div style="text-align: center;">
                        Hello ThickBox!</div>
                </div>
            </div>
            </form>
        </body>
        </html>

    Am I missing something? Thanks, Ali
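
    One detail that stands out, offered as a guess rather than a confirmed diagnosis: jQuery plugins must be loaded after jQuery itself, yet both Thickbox files above are referenced before jquery-1.4.2.min.js (and loading thickbox.js alongside thickbox-compressed.js duplicates the plugin). A conventional ordering would be:

        <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="Scripts/thickbox-compressed.js" type="text/javascript"></script>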

    Read the article

  • jQuery "Microsoft JScript runtime error: Object expected"

    - by Oskar Kjellin
    I have the code below, which does not seem to work at all. I keep getting: "Microsoft JScript runtime error: Object expected". The error seems to occur when the timeout fires; if I raise the timeout by 10 seconds, the error is delayed by another 10 seconds. I want to be able to update the number of friends online asynchronously. The number is shown with the following HTML:

        <a href="" id="showChat" >Friends online <strong id="friendsOnline">(?)</strong></a>

    The friends count is set on the first run, but when the timeout fires, the function does not run again. Also, I cannot see on which line the error occurs, because if I ask to break on the error it just shows "no source code". The code below is the code I'm using. Thanks!

        <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.3.2.js" type="text/javascript"></script>
        <script src='/Scripts/MicrosoftAjax.js' type="text/javascript"></script>
        <script src='/Scripts/MicrosoftMvcAjax.js' type="text/javascript"></script>
        <script src='/Scripts/jquery.autocomplete.js' type="text/javascript"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                UpdateFriendsOnline();
                function UpdateFriendsOnline() {
                    window.setTimeout("UpdateFriendsOnline()", 1000);
                    $.get("/Account/GetFriendsOnline", function(data) {
                        $("#friendsOnline").html("(" + data + ")");
                    });
                }
            });
        </script>
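
    For what it's worth, a likely culprit: passing a string to window.setTimeout causes it to be evaluated in the global scope, where UpdateFriendsOnline does not exist (it is local to the ready handler), hence "Object expected" once the first timeout fires. A sketch of the same code passing a function reference instead:

        $(document).ready(function () {
            function UpdateFriendsOnline() {
                // schedule the next poll with a reference, not an eval'd string
                window.setTimeout(UpdateFriendsOnline, 1000);
                $.get("/Account/GetFriendsOnline", function (data) {
                    $("#friendsOnline").html("(" + data + ")");
                });
            }
            UpdateFriendsOnline();
        });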

    Read the article

  • Changing the Android emulator locale automatically

    - by Christopher
    For automated testing (using Hudson) I have a script that generates a bunch of emulators for many combinations of Android OS version, screen resolution, screen density and language. This works fine, except for the language part. I need to find a way to change the Android system locale automatically. Here are some approaches I can think of, in order of preference:

    1. Extracting/editing/repacking a QEMU image directly before starting the emulator
    2. Running some sort of system-locale-changing APK on the emulator after startup
    3. Changing the locale settings on the emulator filesystem after startup
    4. Changing the locale settings in some SQLite DB on the emulator after startup
    5. Running a key sequence (via the emulator's telnet interface) that would open the settings app and change the locale
    6. Manually starting the emulator for each platform version, changing the locale by hand in the settings, saving it and archiving the images for later deployment

    Any ideas whether this can be done, either via the above methods or otherwise? Do you know where locale settings are persisted to and read from by the system?

    Solution: thanks to dtmilano's info about the relevant properties, and some further investigation on my part, I came up with a solution even better and simpler than all the ideas above! I have updated the answer below with the details.
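
    Since the updated answer itself isn't quoted here, the property-based technique looks roughly like this; a sketch, assuming the emulator honors the persist.sys.* locale properties passed at boot (the AVD name is a placeholder):

        # start the emulator with the system locale preset to fr-FR
        emulator -avd my-avd -prop persist.sys.language=fr -prop persist.sys.country=FR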

    Read the article

  • How to use SMO.Scripter to generate a "full-script" of DB?

    - by ssg
    What I'm trying to do is a very simple task: I'd like to create a script to generate a database along with its tables, SPs and UDFs. This is done with a couple of clicks on the SSMS interface. However, db.Script() only scripts CREATE DATABASE. OK, so I iterate over the objects one by one and script them individually. Now what I have is an arbitrary order of CREATEs, naturally failing during execution because dependent objects aren't created first. OK, so I set the WithDependencies flag so dependent objects ARE scripted first. However, this produces redundant CREATE scripts for objects that are already created, causes around 20x growth in SQL file size and generation time, and still leads to errors during execution. I don't know if there is a way to mark objects as "already walked in the dependency tree"; it doesn't seem likely. I might be missing a bigger picture somewhere, but MSDN recommends "Scripter" to generate scripts like the one I want. I had used the Transfer class before to transfer table definitions, but it fails to create a failsafe script, and it doesn't make sense to use a Transfer object to generate a script anyway. I want to do this the way it should be done, without losing my faith in SMO.
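
    A hedged sketch of one way through this with SMO itself: let DependencyWalker compute a parents-first creation order over all the objects once, then script each node exactly once with WithDependencies turned off (server and database names are placeholders):

        // requires: using Microsoft.SqlServer.Management.Smo;
        //           using Microsoft.SqlServer.Management.Sdk.Sfc;
        Server server = new Server("localhost");
        Database db = server.Databases["MyDatabase"];

        // collect the URNs of everything that should end up in the script
        List<Urn> urns = new List<Urn>();
        foreach (Table t in db.Tables) if (!t.IsSystemObject) urns.Add(t.Urn);
        foreach (StoredProcedure p in db.StoredProcedures) if (!p.IsSystemObject) urns.Add(p.Urn);
        foreach (UserDefinedFunction f in db.UserDefinedFunctions) if (!f.IsSystemObject) urns.Add(f.Urn);

        // order them parents-first instead of using WithDependencies
        DependencyWalker walker = new DependencyWalker(server);
        DependencyTree tree = walker.DiscoverDependencies(urns.ToArray(), true);
        DependencyCollection ordered = walker.WalkDependencies(tree);

        Scripter scripter = new Scripter(server);
        scripter.Options.WithDependencies = false;   // each object scripted once

        foreach (DependencyCollectionNode node in ordered)
            foreach (string line in scripter.Script(new Urn[] { node.Urn }))
                Console.WriteLine(line);

    One caveat: the walk can surface parent objects that weren't collected (system objects, for instance), so the final loop may need a filter against the original URN set.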

    Read the article

  • ASP.NET MVC2 jQuery datepicker errors

    - by Andy Evans
    I'm getting the "Microsoft JScript runtime error: Object doesn't support this property or method" error when calling the datepicker function on a textbox generated from my data model. In the head section I have:

        <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
        <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                $('#dob').datepicker();
            });
        </script>

    and in the body section I have:

        <% Html.EnableClientValidation(); %>
        <% using (Html.BeginForm()) { %>
        ...
        <tr>
            <td class="label">Date of Birth:</td>
            <td><%: Html.TextBoxFor(model => model.dob, new { @class = "inputtext" })%></td>
            <td><%: Html.ValidationMessageFor(model => model.dob) %></td>
        </tr>
        ...
        <% } %>

    Do I have something in the wrong place? Again, you folks are a great help, and assistance would be greatly appreciated.
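
    A hedged observation: datepicker ships with jQuery UI rather than jQuery core, and no jQuery UI script appears in the head section above, which is consistent with the "Object doesn't support this property or method" error. If that is the gap, the missing includes would look something like this (file names are illustrative):

        <script src="../../Scripts/jquery-ui-1.8.custom.min.js" type="text/javascript"></script>
        <link href="../../Content/themes/base/jquery-ui.css" rel="stylesheet" type="text/css" />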

    Read the article

  • jax-ws on glassfish3 init method

    - by Alex
    Hi all, I've created a simple JAX-WS web service (an annotated Java 6 class exposed as a web service) and deployed it on GlassFish v3. The web.xml:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <web-app>
            <servlet>
                <servlet-name>MyServiceName</servlet-name>
                <description>Blablabla</description>
                <servlet-class>com.foo-bar.somepackage.TheService</servlet-class>
                <load-on-startup>1</load-on-startup>
            </servlet>
            <servlet-mapping>
                <servlet-name>MyServiceName</servlet-name>
                <url-pattern>/MyServiceName</url-pattern>
            </servlet-mapping>
            <session-config>
                <session-timeout>30</session-timeout>
            </session-config>
        </web-app>

    There is no sun-jaxws.xml in the WAR. The service works fine, but I have two issues:

    1. I'm using the Apache Commons Configuration package to read my configuration, so I have an init function that sets up the configuration. How can I configure an init method for a JAX-WS service (as I can for servlets, for example)?
    2. The load-on-startup parameter is not affecting the service; I see that the init function (and the constructor) is called again for every request. How can I set the scope of my service?

    Thanks a lot,
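
    On the first question, container-managed JAX-WS endpoints honor the standard lifecycle annotations, so the init hook can be a @PostConstruct method; a minimal sketch, assuming the endpoint is managed by the container as deployed above:

        import javax.annotation.PostConstruct;
        import javax.jws.WebService;

        @WebService
        public class TheService {

            @PostConstruct
            public void init() {
                // invoked once after the container instantiates the endpoint:
                // a natural place for the Commons Configuration setup
            }
        }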

    Read the article

  • Ajax Minifier Visual Studio include all javascript files

    - by Michael
    I am using the Ajax Minifier (http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=766) and have embedded it in the csproj file for use in Visual Studio 2008 (not the free version). I have two folders, Content and Scripts, directly under the root of the project. The Content folder also has subfolders, and I would like to include all of these as well (if I have to manually add each subfolder, that is fine too). Currently my csproj file looks like this (the fragment is included within the Project tags, as instructed). There are no build errors; the files simply do not get minified. (I've enabled Project - Show All Files.)

        <Import Project="$(MSBuildExtensionsPath)\Microsoft\MicrosoftAjax\ajaxmin.tasks" />
        <Target Name="AfterBuild">
            <ItemGroup>
                <JS Include="Scripts\*.js" Exclude="Scripts\*.min.js;" />
                <JS Include="Content\**\*.js" Exclude="Content\**\*.min.js;" />
            </ItemGroup>
            <AjaxMin SourceFiles="@(JS)" SourceExtensionPattern="\.js$" TargetExtension=".min.js" />
        </Target>

    How would I edit the csproj file in order to include these folders?

    Read the article

  • Toggle Android emulator network traffic from emulator invocation

    - by highphi
    I'm working on scripts to manage large numbers of Android emulators, and I need to disable all network traffic on some of them. Because I'm doing all of this on a headless server, I cannot use the F8 hotkey described in the emulator documentation. I'm currently routing the TCP traffic through a null proxy by using emulator-arm ... -http-proxy 0.0.0.0:0, and this blocks the traffic I want it to. I thought this was working well until I noticed some strange error messages while running my scripts. The console started outputting "accept too many open files", and checking the open files with lsof reveals numerous entries stating "can't identify protocol":

        ...
        emulator- 19463 username 19u sock 0,6 0t0 1976595845 can't identify protocol
        emulator- 19463 username 20u sock 0,6 0t0 1976595847 can't identify protocol
        ...

    The only "solution" I have found is to kill all of the emulators and then wait until the limit is reached again, which is hardly a solution at all. Is there another way to do this while invoking the emulator? Am I using the -http-proxy switch incorrectly to block the traffic? Other people have blocked traffic by manually enabling airplane mode, but this isn't feasible for me, as I'm controlling emulators via scripts. I could send keyevents to the emulator with my script and turn the phone on in airplane mode, but I would prefer something more reliable than that.
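
    A hedged alternative to the null proxy, since headless emulators are still reachable over adb: toggle the radios with the svc shell command, which needs neither the F8 hotkey nor fragile keyevent scripting (the serial number is illustrative, and this assumes the emulator image ships the svc binary):

        # disable cellular data and wifi on a running emulator
        adb -s emulator-5554 shell svc data disable
        adb -s emulator-5554 shell svc wifi disable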

    Read the article

  • Invoking WCF service from Javascript

    - by KhanS
    I have an ASP.NET web application with some JavaScript code in it. While calling the service I am getting the exception "Service1 is undefined". Below is my code.

    Service:

        namespace WebApplication2
        {
            // NOTE: You can use the "Rename" command on the "Refactor" menu to change the interface name "IService1" in both code and config file together.
            [ServiceContract(Namespace="WCFServices")]
            public interface IService1
            {
                [OperationContract]
                string HelloWorld();
            }
        }

    Implementation:

        namespace WebApplication2
        {
            // NOTE: You can use the "Rename" command on the "Refactor" menu to change the class name "Service1" in code, svc and config file together.
            [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            public class Service1 : IService1
            {
                public string HelloWorld()
                {
                    return "Hello world from service";
                }
            }
        }

    ASPX page:

        <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
            <asp:ScriptManager ID="QNAScriptManager" runat="server">
                <Services>
                    <asp:ServiceReference Path="~/Service1.svc" />
                </Services>
                <Scripts>
                    <asp:ScriptReference Path="~/Scripts/Questions.js" />
                </Scripts>
            </asp:ScriptManager>
        </asp:Content>

    JavaScript:

        var ServiceProxy;
        function pageLoad() {
            ServiceProxy = new Service1();
            ServiceProxy.set_defaultSucceededCallback(SucceededCallback);
        }
        function GetString() {
            ServiceProxy.HelloWorld();
        }
        function SucceededCallback(result, userContext, methodName) {
            var RsltElem = document.getElementById("Results");
            RsltElem.innerHTML = result + " from " + methodName + ".";
            alert("Msg received from service");
        }
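
    Two checks worth making, offered as a hedged sketch rather than a confirmed fix. First, a ScriptManager service reference requires the WCF endpoint to be AJAX-enabled, i.e. a webHttpBinding endpoint with the enableWebScript behavior, roughly like this in web.config:

        <system.serviceModel>
          <behaviors>
            <endpointBehaviors>
              <behavior name="AjaxBehavior">
                <enableWebScript />
              </behavior>
            </endpointBehaviors>
          </behaviors>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <services>
            <service name="WebApplication2.Service1">
              <endpoint address="" behaviorConfiguration="AjaxBehavior"
                        binding="webHttpBinding" contract="WebApplication2.IService1" />
            </service>
          </services>
        </system.serviceModel>

    Second, because the contract sets [ServiceContract(Namespace="WCFServices")], the generated JavaScript proxy is typically exposed under that namespace rather than as a bare Service1; browsing to Service1.svc/jsdebug shows the exact client-side name to use in pageLoad.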

    Read the article

  • perl issuing os command with defined variables

    - by Vinnie Biros
    I am adding functionality to my scripts so that they can use Kerberos authentication to run automatically and use secure protocols when executing. I have this working in shell scripts that do exactly what I want; however, I am having trouble porting it to Perl for use within my Perl scripts, as I am new to Perl. Here is the working shell code:

        #!/bin/sh
        ticketFileName=`basename $0-$$`         # set filename variable to name of script plus the PID
        krb5CacheLocation=/tmp/$ticketFileName  # set ticket cache location to /tmp + script name
        /usr/share/centrifydc/kerberos/bin/kinit -c $krb5CacheLocation -kt /root/.ssh/someaccount.keytab someaccount  # get TGT and specify ticket cache location on kinit
        export KRB5CCNAME=$krb5CacheLocation    # set the KRB5CCNAME variable to tell ssh where to look

    What I have attempted in Perl:

        #!/usr/bin/perl
        my $ticketFileName = `basename $0-$$`;
        my $krb5CacheLocation = '/tmp/' . $ticketFileName;
        `export KRB5CCNAME=$krb5CacheLocation`;
        `/usr/share/centrifydc/kerberos/bin/kinit -c $krb5CacheLocation -kt /root/.ssh/unixmap0000.keytab unixmap0000`;

    It seems not to like the variable I am passing into the OS command. Does anyone have any ideas or suggestions?
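
    For what it's worth, a sketch of the same logic in more idiomatic Perl. Two details matter: the backtick export runs in a child shell and vanishes with it, so the variable never reaches the Perl process or the commands it spawns, whereas assigning to %ENV does; and the backtick basename output carries a trailing newline, which File::Basename avoids. Paths and the principal are carried over from the question:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Basename;

        my $ticketFileName    = basename($0) . "-$$";   # script name plus PID
        my $krb5CacheLocation = "/tmp/$ticketFileName";

        # set in this process's environment so every child (kinit, ssh) inherits it
        $ENV{KRB5CCNAME} = $krb5CacheLocation;

        system('/usr/share/centrifydc/kerberos/bin/kinit',
               '-c',  $krb5CacheLocation,
               '-kt', '/root/.ssh/unixmap0000.keytab',
               'unixmap0000') == 0
            or die "kinit failed: exit status $?\n";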

    Read the article

  • cl.exe Difference in object files when /E output is the same and flags are the same

    - by madiyaan damha
    Hello: I am using Visual Studio 2005's cl.exe compiler. I call it with a bunch of /I and /D flags and some compilation/optimization flags (example: /EHsc). I have two compilation scripts, which differ only in the /I flags (the include directories are different); all other flags are the same. These scripts produce different object files (and not just a timestamp difference, as noted below). The strange thing is that the /E output of both scripts is the same. That means the include files are not causing the difference in the object files; but then where is the difference coming from? Can anyone shed light on how I am seeing two different object files in this situation? If the include files were causing the difference, how come I see identical /E output?

    PS: The object files differ not only in the timestamp, but also in the code sections. In fact, the behavior of my final executable is different in the two cases.

    Edit: PPS: I even looked at the /showIncludes output of cl.exe and that output is identical. The object files, however, differ in more than just the timestamp (in fact, one is 1KB bigger than the other!)

    Read the article

  • Passing custom Python objects to nosetests

    - by Rob
    I am attempting to re-organize our test libraries for automation, and nose seems really promising. My question is: what is the best strategy for passing Python objects into nose tests? Our tests are organized in a testlib with a bunch of modules that exercise different types of request operations. Something like this:

        testlib
          \- testmoda
          \- testmodb
          \- testmodc

    In some cases the test modules (e.g. testmoda) are nothing but test_something(), test_something2() functions, while in other cases we have a TestModB class in testmodb with test_anotherthing1(), test_anotherthing2() functions. The cool thing is that nose easily finds both. Most of those test functions are request-factory stuff that can easily share a single connection to our server farm, so we do a lot of test_something1(cnn), TestModB.test_anotherthing2(cnn), etc. Currently we don't use nose; instead we have a hodge-podge of homegrown driver scripts with hard-coded lists of tests to execute. Each of those driver scripts creates its own connection object, and maintaining those scripts and the connection minutiae is painful. I'd like to take free advantage of nose's beautiful discovery functionality, passing in a connection object of my choosing. Thanks in advance! Rob

    P.S. The connection objects are not pickle-able. :(
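
    For what it's worth, a minimal sketch of one nose-friendly approach: a package-level fixture in testlib/__init__.py, which nose runs once before any module in the package. Here make_connection() is a hypothetical stand-in for whatever factory builds the (non-picklable) connection:

        # testlib/__init__.py
        _cnn = None

        def setup_package():
            # nose calls this once, before any test module in testlib runs
            global _cnn
            _cnn = make_connection()   # hypothetical connection factory

        def get_connection():
            return _cnn

        def teardown_package():
            if _cnn is not None:
                _cnn.close()

    Test modules then do "from testlib import get_connection" and fetch the shared object inside each test, instead of having driver scripts pass cnn around.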

    Read the article

  • Any suggestions for good automated web load testing tool?

    - by fmunkert
    What are some good automated tools for load testing (stress testing) web applications that do not use record-and-replay of HTTP network packets? I am aware that there are numerous load-testing tools on the market that record and replay HTTP network packets. But these are unsuitable for my purpose, because:

    - The HTTP packet format changes very often in our application (e.g. when we optimize an AJAX call). We do not want to adapt all test scripts just because there is a slight change in the HTTP packet format.
    - Our test team should not need to know any internals of our application to write their test scripts. A tool that replays HTTP packets, however, requires the team to know the format of HTTP requests and responses, so that they can adapt details of the replayed packets (e.g. user name).

    The automated load-testing tool I am looking for should let the test team write "black box" test scripts such as: invoke the web page at URL http://... ; first, enter XXX into text field XXX; then, press button XXX; wait until a response has been received from the web server; verify that text field XXX now contains the text XXX. The tool should be able to simulate up to several thousand users, and it should be compatible with web applications using ASP.NET and AJAX.

    Read the article

  • Getting error: "This webpage is not available" for my chrome app's options page

    - by Don Rhummy
    My CRX has the proper HTML page options.html in it, and the manifest declares it properly (it shows up as a link on the chrome://extensions page), but when I click that link, Chrome gives the error:

        This webpage is not available
        The webpage at chrome-extension://invalid/ might be temporarily down or it may have moved permanently to a new web address.

    It says "invalid", but the app runs perfectly well (all the content scripts run, and the background page created a database and saved to it). Why would it show as invalid? Why doesn't it have the extension's ID? Here's the manifest:

        {
            "manifest_version": 2,
            "name": "MyAPP",
            "description": "My App",
            "version": "0.0.0.32",
            "minimum_chrome_version": "27",
            "offline_enabled": true,
            "options_page": "options.html",
            "icons": {
                "16": "images/icon16.png",
                "48": "images/icon48.png",
                "128": "images/icon128.png"
            },
            "app": {
                "background": {
                    "scripts": [ "scripts/background.js" ]
                }
            },
            "permissions": [
                "unlimitedStorage",
                "fullscreen",
                { "fileSystem": [ "write" ] },
                "background",
                "<all_urls>",
                "tabs"
            ]
        }

    Does it need to be declared in "web_accessible_resources"? Any idea what's wrong?

    Update: adding it to "web_accessible_resources" does not fix the issue. I added everything on that page too.

    Update 2: It looks like it might be a Chrome bug for packaged apps. When I remove the "app" section in the manifest, it works! This is a bug, since the Chrome app documentation states that apps can have options pages: https://developer.chrome.com/apps/options.html

    Read the article

  • javascript - Detect if Google Analytics is loaded yet?

    - by Geuis
    I'm working on a project that will store some info in Google Analytics custom variables. The script I'm building needs to detect whether GA has loaded yet before I can push data to it. The project is being designed to work across any kind of site that uses GA. The problem is reliably detecting whether GA has finished loading and is available. A couple of sources of variability here:

    1. There are multiple methods of loading GA, from the older scripts of the Urchin days up to the latest asynchronous snippet. Some of these are inline, some are asynchronous. Also, some sites use custom methods of loading GA, like at my job, where we use YUI getScript to load it.
    2. Variable variable names. In some scripts, the variable name assigned to GA is "pageTracker"; in others, it's "_gaq". Then there's the endless variety of custom variable names that sites could be using for their implementation of GA.

    So does anyone have any thoughts on what might be a reliable way to check if Google Analytics is being used on the page, and if it's been loaded?
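
    A hedged sketch of the polling approach, checking the well-known globals from the async, classic, and Urchin loaders (deliberately not exhaustive, given the custom-name problem above):

        function whenAnalyticsReady(callback, tries) {
            tries = (tries === undefined) ? 50 : tries;
            // check the common tracker globals; custom names would need to be added
            var ga = window._gaq || window.pageTracker || window.urchinTracker;
            if (ga) {
                callback(ga);
            } else if (tries > 0) {
                // not there yet: poll again in 100 ms, up to the retry budget
                setTimeout(function () { whenAnalyticsReady(callback, tries - 1); }, 100);
            }
        }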

    Read the article

  • How to avoid chaotic ASP.NET web application deployment?

    - by emzero
    OK, so here's the thing. I'm developing an existing web application (it started as an ASP classic app, so you can imagine :P) under ASP.NET 4.0 and SQL Server 2005. We are 4 developers using local instances of SQL Server 2005 Express, sharing the source code and a Visual Studio database project. This webapp has several "universes" (that's what we call them). Every universe has its own database (currently on the same server), but they all share the same schema (tables, sprocs, etc.) and the same source/site code. So deploying manually is really annoying, because I have to deploy the source code and then run the SQL scripts manually on each database. I know that manual deployment can cause problems, so I'm looking for a way to automate it. We've recently created a Visual Studio database project to manage the schema and generate the diff-schema scripts with different targets. I don't have a clear idea of how to put the pieces together, but I would like to:

    - Have a way to make a "sync" deploy to a target server (thankfully I have full RDC access to the servers, so I can install things if required). By "sync" deploy I mean that I don't want to fully deploy the whole application, because it has lots of files; I just want to deploy those that are new or changed.
    - Generate diff-SQL update scripts for every database target and combine them into just one script. For this I would need a list of the database names somewhere.
    - Copy the site files and execute the generated SQL script in an easy and automated way.

    I've read about MSBuild, MS WebDeploy, NAnt, etc., but I don't really know where to start, and I really want to get rid of this manual deployment. If there is a better and easier way of doing it than what I enumerated, I'll be pleased to read your options. I know this is not a very specific question, but I've googled a lot about it and it seems I cannot figure out how to do it. I've never used any automation tool for deployment. Any help will be really appreciated. Thank you all, regards.
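
    As one hedged starting point for the "sync" half of the problem: MSDeploy's sync verb copies only new or changed files to a target IIS server, so a per-universe deployment can reduce to a loop over commands like the following (paths, site folder and server name are placeholders):

        msdeploy -verb:sync ^
          -source:contentPath="C:\build\MyWebApp" ^
          -dest:contentPath="D:\sites\universe1",computerName=UNIVERSE-SERVER ^
          -enableRule:DoNotDeleteRule

    The diff-SQL half could then be a sqlcmd loop over the list of universe databases, executing the combined script generated by the database project.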

    Read the article

  • 'jQuery' is undefined

    - by Raja
    I am working on an ASP.NET project created with local file system settings. I am using MVC and jQuery. jQuery works fine when I run the application in debug mode, i.e. on the ASP.NET Development Server. I am trying to host the application in IIS 7. In hosted mode, it does not recognize jQuery and gives the scripting error "jQuery is undefined". The locations of the script files are unchanged in both modes. Does anybody have any clue what the reason can be, and how to solve this? My code looks like this:

        <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
        <script src="../../Scripts/MicrosoftAjax.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcAjax.debug.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery-1.2.6.js" type="text/javascript"></script>
        <!-- YUI Styles -->
        <link href="../../Content/reset.css" rel="stylesheet" type="text/css" />
        <link href="../../Content/fonts.css" rel="stylesheet" type="text/css" />
        <link href="../../Content/grids.css" rel="stylesheet" type="text/css" />
        <!-- /YUI Styles -->
        <link href="../../Content/knowledgebase.css" rel="stylesheet" type="text/css" />
        <script type="text/javascript">
            //this hides the javascript warning if javascript is enabled
            (function($) {
                $(document).ready(function() {
                    $('#jswarning').hide();
                });
            })(jQuery);
        </script>
        <asp:ContentPlaceHolder ID="ScriptContent" runat="server" />
        ....
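
    One hedged guess, given that only the IIS-hosted mode fails: the ../../Scripts paths resolve relative to the request URL, and under an IIS application the site root often differs from what the VS development server serves. Emitting application-root-relative paths sidesteps this, e.g.:

        <script src="<%= Url.Content("~/Scripts/jquery-1.2.6.js") %>" type="text/javascript"></script>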

    Read the article
