Search Results

Search found 93612 results on 3745 pages for 'inquisitive one'.


  • Need help with tweaking a Dropdown with three options, user selects one - a hidden div appears with correct fields

    - by Jack
    Hello again! I posted a question on here and got an answer within minutes, so I'm hoping the wizards here can help me again. I'm using a script I found online to add a feature to a shopping cart form.

    Here's the setup: I have a payment-method dropdown with Visa, Mastercard and Bank Withdrawal as the options. For the credit cards I have one hidden div with a certain set of fields, and for the bank I have another hidden div. The divs have the IDs #payCredit and #payBank, and the CSS for both is margin: 0px and display: none;.

    Here's a piece of JavaScript I used successfully on a shipping-address checkbox:

    ```javascript
    function toggleLayer(whichLayer) {
      var elem, vis;
      if (document.getElementById)   // this is the way the standards work
        elem = document.getElementById(whichLayer);
      else if (document.all)         // this is the way old MSIE versions work
        elem = document.all[whichLayer];
      else if (document.layers)      // this is the way NN4 works
        elem = document.layers[whichLayer];
      vis = elem.style;
      // if the style.display value is blank we try to figure it out here
      if (vis.display == '' && elem.offsetWidth != undefined && elem.offsetHeight != undefined)
        vis.display = (elem.offsetWidth != 0 && elem.offsetHeight != 0) ? 'block' : 'none';
      vis.display = (vis.display == '' || vis.display == 'block') ? 'none' : 'block';
    }
    ```

    I was hoping I could change it slightly to meet my needs. Here's the dropdown:

    ```html
    <label>Payment Method:</label>
    <select name="payment" id="payment" class="dropdown3" style="width:8em">
      <option selected="selected">Select</option>
      <option value="Visa" onclick="javascript:toggleLayer('payCredit');">Visa</option>
      <option value="MasterCard" onclick="javascript:toggleLayer('payCredit');">Mastercard</option>
      <option value="Direct" onclick="javascript:toggleLayer('payBank');">Direct Withdraw</option>
    </select></li>
    ```

    The current result is that it only sort of works: if I select Visa, its div appears; if I select Visa again, it disappears; and if I select Visa and then the bank option, both divs appear. You can see what I'm working on here: http://test.sharevetcare.com/bestfriends/cart3.html
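    A hedged suggestion (a sketch, not a confirmed diagnosis): onclick handlers on <option> elements are unreliable across browsers, and because toggleLayer only toggles one layer, the two divs never hide each other. Reacting to the select's change event instead, reusing the #payment, #payCredit and #payBank IDs from the question, shows exactly one panel at a time:

    ```javascript
    // Sketch: drive the panels from the select's change event rather than
    // option onclick, so each selection shows one div and hides the other.
    document.getElementById('payment').onchange = function () {
      var showCredit = (this.value === 'Visa' || this.value === 'MasterCard');
      document.getElementById('payCredit').style.display = showCredit ? 'block' : 'none';
      document.getElementById('payBank').style.display = (this.value === 'Direct') ? 'block' : 'none';
    };
    ```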


  • ASP.NET MVC 2 Hosting :: MVC2 deploy - Could not load file or assembly 'System.Web.MVC, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies.

    - by mbridge
    A new MVC 2 project worked on my local machine, but when it was deployed to the test server it gave the error "Could not load file or assembly 'System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies." I have the full Visual Studio 2010 installed on my local machine but just the .NET 4 framework installed on the test servers. The MVC assemblies do not come with the .NET 4 framework itself, so how do you make MVC 2 work on the test servers?

    When I installed Visual Studio 2010 on my local machine it came with all the fruit included (.NET 4 framework, MVC 2, etc.), so System.Web.Mvc.dll can be found in my machine's GAC (C:\Windows\assembly). However, since there is no need to bloat the web servers, only the plain old .NET 4 framework has been installed on them. This does NOT include the MVC assembly, and that is why it cannot be found by the web application. You need this assembly on the web servers you are deploying to, but you want to avoid copying it into the GAC manually and doing all that gacutil mess, and you may not even have access to your servers if they are hosted by a provider.

    Solution: make the System.Web.Mvc assembly bin-deployable. That doesn't sound easy, but here is how to do it for the necessary MVC references: simply right-click the reference, select "Properties", then change "Copy Local" to "True".

    Note: if your server has .NET 3.5 SP1 installed, the new(ish) assemblies System.Web.Routing and System.Web.Abstractions will already be in the GAC. If you had previously deployed an MVC 1 application to a .NET 3.5 server you may remember having to deploy those two assemblies as well. Since MVC 2 requires at least .NET 3.5 SP1, you will not need to worry about them, just System.Web.Mvc.
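    For reference, a sketch of what that setting persists to in the project file; the <Private> element is how Visual Studio normally records Copy Local, but check your own .csproj:

    ```xml
    <!-- "Copy Local = True" is stored as <Private>True</Private>, which makes
         MSBuild copy System.Web.Mvc.dll into the application's bin folder. -->
    <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
      <Private>True</Private>
    </Reference>
    ```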


  • Could not load file or assembly 'System.Data.SQLite' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    - by Om Talsania
    Problem description: Could not load file or assembly 'System.Data.SQLite' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    When it is likely to be reproduced: you will usually encounter this problem when you have downloaded a sample application that is a 32-bit application targeted at ASP.NET 2.0 or 3.5, and you have IIS7 on a 64-bit OS running .NET 4.0, because the default setting for running 32-bit applications on IIS7 on a 64-bit OS is False.

    Resolution:
    1. Go to the IIS Management Console: Start -> Administrative Tools -> Internet Information Services (IIS) Manager.
    2. Expand your server in the left pane and go to Application Pools.
    3. Right-click and select "Add Application Pool".
    4. Create a new AppPool. I have named it "ASP.NET v2.0 AppPool (32-bit)" and selected .NET Framework v2.0.50727 because I intend to run my ASP.NET 3.5 application on it.
    5. Now right-click the newly created AppPool and select Advanced Settings.
    6. Change the property "Enable 32-Bit Applications" from False to True.
    7. Now select your actual web application from the left panel. Right-click the web application and go to Manage Application -> Advanced Settings.
    8. Change the "Application Pool" property to your newly created AppPool.
    And... the error is gone.
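    The same switch can be flipped from the command line. A sketch using IIS7's appcmd (the AppPool name below is the one created in step 4; substitute your own):

    ```bat
    %windir%\system32\inetsrv\appcmd set apppool "ASP.NET v2.0 AppPool (32-bit)" /enable32BitAppOnWin64:true
    ```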


  • getting database connectivity issue from only one part of my ASP.net project?

    - by Greg
    Hi. Something weird has started happening to my project with Dynamic Data. Suddenly I am getting connection errors when going to the DD navigation pages, but things work fine on the default DD main page, and other MVC pages I've added in myself talk to the database fine too. Any ideas? In summary, if I go to the following web pages:

    * custom URL for my custom controller - works fine, including getting database data
    * main DD page from the root URL - works fine
    * a link to a table maintenance page from the main DD page - DATABASE CONNECTIVITY ERROR

    Some items:

    * I'm using SQL Server Express 2008.
    * There doesn't seem to be any debug/error info in VS2010 at all that I can see.
    * Web.config entry:

    ```xml
    <add name="Model1Container"
         connectionString="metadata=res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=GREG\SQLEXPRESS_2008;Initial Catalog=greg_development;Integrated Security=True;MultipleActiveResultSets=True&quot;"
         providerName="System.Data.EntityClient"/>
    ```

    Error:

    ```
    Server Error in '/' Application.
    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

    Exception Details: System.Data.SqlClient.SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)

    Source Error:
    Line 39:   DropDownList1.Items.Add(new ListItem("[Not Set]", NullValueString));
    Line 40: }
    Line 41: PopulateListControl(DropDownList1);
    Line 42: // Set the initial value if there is one
    Line 43: string initialValue = DefaultValue;

    Source File: U:\My Dropbox\source\ToplogyLibrary\Topology_Web_Dynamic\DynamicData\Filters\ForeignKey.ascx.cs    Line: 41

    Stack Trace:
    [SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified)]
       System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +5009598
       System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning() +234
       System.Data.SqlClient.TdsParser.Connect(ServerInfo serverInfo, SqlInternalConnectionTds connHandler, Boolean ignoreSniOpenTimeout, Int64 timerExpire, Boolean encrypt, Boolean trustServerCert, Boolean integratedSecurity) +341
       System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, SqlConnection owningObject) +129
       System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, TimeoutTimer timeout) +239
       System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, TimeoutTimer timeout, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) +195
       System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) +232
       System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) +185
       System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) +33
       System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) +524
       System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) +66
       System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +479
       System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +108
       System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +126
       System.Data.SqlClient.SqlConnection.Open() +125
       System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) +52

    [EntityException: The underlying provider failed on Open.]
       System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) +161
       System.Data.EntityClient.EntityConnection.Open() +98
       System.Data.Objects.ObjectContext.EnsureConnection() +81
       System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) +46
       System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() +44
       System.Data.Objects.ObjectQuery`1.GetEnumeratorInternal() +36
       System.Data.Objects.ObjectQuery.System.Collections.IEnumerable.GetEnumerator() +10
       System.Web.DynamicData.Misc.FillListItemCollection(IMetaTable table, ListItemCollection listItemCollection) +50
       System.Web.DynamicData.QueryableFilterUserControl.PopulateListControl(ListControl listControl) +85
       Topology_Web_Dynamic.ForeignKeyFilter.Page_Init(Object sender, EventArgs e) in U:\My Dropbox\source\ToplogyLibrary\Topology_Web_Dynamic\DynamicData\Filters\ForeignKey.ascx.cs:41
       System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +14
       System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +35
       System.Web.UI.Control.OnInit(EventArgs e) +91
       System.Web.UI.UserControl.OnInit(EventArgs e) +83
       System.Web.UI.Control.InitRecursive(Control namingContainer) +140
       System.Web.UI.Control.AddedControl(Control control, Int32 index) +197
       System.Web.UI.ControlCollection.Add(Control child) +79
       System.Web.DynamicData.DynamicFilter.EnsureInit(IQueryableDataSource dataSource) +200
       System.Web.DynamicData.QueryableFilterRepeater.<Page_InitComplete>b__1(DynamicFilter f) +11
       System.Collections.Generic.List`1.ForEach(Action`1 action) +145
       System.Web.DynamicData.QueryableFilterRepeater.Page_InitComplete(Object sender, EventArgs e) +607
       System.EventHandler.Invoke(Object sender, EventArgs e) +0
       System.Web.UI.Page.OnInitComplete(EventArgs e) +8871862
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +604

    Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1
    ```
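    A quick isolation test, offered as a sketch rather than a fix: open a plain SqlConnection with the same connection string from inside the web app (a scratch page, for instance). If this throws the same error 26, the problem is server reachability or the instance name, not Dynamic Data itself:

    ```csharp
    // Sketch: isolate the failure with a raw connection using the same
    // connection string from Web.config. If conn.Open() throws error 26,
    // the SQL Server instance cannot be located at all.
    using (var conn = new System.Data.SqlClient.SqlConnection(
        @"Data Source=GREG\SQLEXPRESS_2008;Initial Catalog=greg_development;" +
        "Integrated Security=True;MultipleActiveResultSets=True"))
    {
        conn.Open(); // throws on failure; success means DD/EF config is the suspect
    }
    ```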


  • How to have two UIPickerViews together in one ViewController?

    - by 0SX
    I'm trying to put 2 UIPickerViews together in one ViewController. Each UIPickerView has a different data array. I'm using Interface Builder to link the pickers up. I know I'll have to use separate delegates and dataSources, but I can't seem to hook everything up with Interface Builder correctly. Here's all my code.

    pickerTesting.h:

    ```objc
    #import <UIKit/UIKit.h>
    #import "picker2DataSource.h"

    @interface pickerTestingViewController : UIViewController <UIPickerViewDelegate, UIPickerViewDataSource> {
        IBOutlet UIPickerView *picker;
        IBOutlet UIPickerView *picker2;
        NSMutableArray *pickerViewArray;
    }
    @property (nonatomic, retain) IBOutlet UIPickerView *picker;
    @property (nonatomic, retain) IBOutlet UIPickerView *picker2;
    @property (nonatomic, retain) NSMutableArray *pickerViewArray;
    @end
    ```

    pickerTesting.m:

    ```objc
    #import "pickerTestingViewController.h"

    @implementation pickerTestingViewController
    @synthesize picker, picker2, pickerViewArray;

    - (void)viewDidLoad {
        [super viewDidLoad];
        pickerViewArray = [[NSMutableArray alloc] init];
        [pickerViewArray addObject:@" 100 "];
        [pickerViewArray addObject:@" 200 "];
        [pickerViewArray addObject:@" 400 "];
        [pickerViewArray addObject:@" 600 "];
        [pickerViewArray addObject:@" 1000 "];
        [picker selectRow:1 inComponent:0 animated:NO];
        picker2.delegate = self;
        picker2.dataSource = self;
    }

    - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)picker {
        return 1;
    }

    - (void)pickerView:(UIPickerView *)picker didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
    }

    - (NSInteger)pickerView:(UIPickerView *)picker numberOfRowsInComponent:(NSInteger)component {
        return [pickerViewArray count];
    }

    - (NSString *)pickerView:(UIPickerView *)picker titleForRow:(NSInteger)row forComponent:(NSInteger)component {
        return [pickerViewArray objectAtIndex:row];
    }

    - (void)didReceiveMemoryWarning {
        // Releases the view if it doesn't have a superview.
        [super didReceiveMemoryWarning];
        // Release any cached data, images, etc that aren't in use.
    }

    - (void)viewDidUnload {
        // Release any retained subviews of the main view.
        // e.g. self.myOutlet = nil;
    }

    - (void)dealloc {
        [super dealloc];
    }
    @end
    ```

    And I have a separate class for the other data source.

    picker2DataSource.h:

    ```objc
    @interface picker2DataSource : NSObject <UIPickerViewDataSource, UIPickerViewDelegate> {
        NSMutableArray *customPickerArray;
    }
    @property (nonatomic, retain) NSMutableArray *customPickerArray;
    @end
    ```

    picker2DataSource.m:

    ```objc
    #import "picker2DataSource.h"

    @implementation picker2DataSource
    @synthesize customPickerArray;

    - (id)init {
        // use predetermined frame size
        self = [super init];
        if (self) {
            customPickerArray = [[NSMutableArray alloc] init];
            [customPickerArray addObject:@" a "];
            [customPickerArray addObject:@" b "];
            [customPickerArray addObject:@" c "];
            [customPickerArray addObject:@" d "];
            [customPickerArray addObject:@" e "];
        }
        return self;
    }

    - (void)dealloc {
        [customPickerArray release];
        [super dealloc];
    }

    - (NSInteger)numberOfComponentsInPickerView:(UIPickerView *)picker2 {
        return 1;
    }

    - (void)pickerView:(UIPickerView *)picker2 didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
    }

    - (NSInteger)pickerView:(UIPickerView *)picker2 numberOfRowsInComponent:(NSInteger)component {
        return [customPickerArray count];
    }

    - (NSString *)pickerView:(UIPickerView *)picker2 titleForRow:(NSInteger)row forComponent:(NSInteger)component {
        return [customPickerArray objectAtIndex:row];
    }
    @end
    ```

    Any help or code examples would be great. Thanks.
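    A common alternative worth sketching, under the assumption that both arrays are moved into the view controller: keep a single delegate/dataSource and branch on which picker is asking, instead of wiring a second class in Interface Builder:

    ```objc
    // Sketch: one delegate/dataSource serves both pickers by comparing the
    // pickerView argument against the IBOutlets from the question (assumes
    // customPickerArray now lives alongside pickerViewArray).
    - (NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component {
        return (pickerView == picker) ? [pickerViewArray count] : [customPickerArray count];
    }

    - (NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component {
        return (pickerView == picker) ? [pickerViewArray objectAtIndex:row]
                                      : [customPickerArray objectAtIndex:row];
    }
    ```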


  • How to manage a MotionEvent going from one View to another?

    - by Darren
    I have a SurfaceView that takes up part of the screen, and some buttons along the bottom. When a button is pressed and the user drags, I want to be able to drag a picture (based on the button) onto the SurfaceView and have it drawn there. I want to be able to use click listeners and the like, and not just have a giant SurfaceView with hand-written code to detect where the user pressed, whether it's a button, etc. I have somewhat of a solution, but it seems a bit of a hack to me. What is the best way to accomplish this using the framework intelligently?

    Part of my XML:

    ```xml
    <RelativeLayout android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:background="@drawable/background">

        <!-- Place buttons along the bottom -->
        <RelativeLayout android:id="@+id/bottom_bar"
            android:orientation="horizontal"
            android:layout_width="fill_parent"
            android:layout_height="40dip"
            android:layout_alignParentBottom="true"
            android:background="@null">

            <ImageButton android:id="@+id/btn_1"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_centerVertical="true"
                android:background="@null"
                android:src="@drawable/btn_1">
            </ImageButton>
            <!-- More buttons here... -->
        </RelativeLayout>

        <!-- Place the SurfaceView in a frame so we can stack on top of it -->
        <FrameLayout android:layout_width="fill_parent"
            android:layout_height="0px"
            android:layout_weight="1"
            android:layout_above="@id/bottom_bar">
            <com.project.question.MySurfaceView android:id="@+id/my_view"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent" />
        </FrameLayout>
    ```

    And the relevant Java code in MySurfaceView, which extends SurfaceView. mTouchX and mTouchY are used in the onDraw method to draw the image:

    ```java
    @Override
    public boolean onTouchEvent(MotionEvent event) {
        mTouchX = (int) event.getX();
        mTouchY = (int) event.getY();
        return true;
    }

    public void onButtonTouchEvent(MotionEvent event) {
        event.setLocation(event.getX(), event.getY() + mScreenHeight);
        onTouchEvent(event);
    }
    ```

    Finally, the activity:

    ```java
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        setContentView(R.layout.my_surface);
        mView = (MySurfaceView) findViewById(R.id.my_view);
        mSurfaceHeight = mView.getHeight();
        mBtn = (ImageButton) findViewById(R.id.btn_1);
        mBtn.setOnTouchListener(mTouchListener);
    }

    OnTouchListener mTouchListener = new OnTouchListener() {
        public boolean onTouch(View v, MotionEvent event) {
            int[] location = new int[2];
            v.getLocationOnScreen(location);
            event.setLocation(event.getX() + location[0], event.getY());
            mView.onButtonTouchEvent(event);
            return true;
        }
    };
    ```

    Strangely, one has to add to the x coordinate in the activity, then add to the y coordinate in the view; otherwise it doesn't show up in the correct position. If you add nothing, something drawn using mTouchX and mTouchY shows up in the upper left corner of the SurfaceView. Any direction would be greatly appreciated. If I'm going about this completely the wrong way, that would be good information too.
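    On the asymmetric offsets: one way to make the translation symmetric is to compute both views' screen positions and shift the event by the difference in a single place, which may also explain why the current code needs an x adjustment in one spot and a y adjustment in another. A sketch using the question's own field names (MotionEvent.offsetLocation and View.getLocationOnScreen are the standard calls involved):

    ```java
    OnTouchListener mTouchListener = new OnTouchListener() {
        public boolean onTouch(View v, MotionEvent event) {
            int[] btnLoc = new int[2];
            int[] surfaceLoc = new int[2];
            v.getLocationOnScreen(btnLoc);
            mView.getLocationOnScreen(surfaceLoc);
            // Shift from button-relative to SurfaceView-relative coordinates
            // in both axes at once.
            event.offsetLocation(btnLoc[0] - surfaceLoc[0], btnLoc[1] - surfaceLoc[1]);
            return mView.onTouchEvent(event);
        }
    };
    ```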


  • Grails validation problems with sets of data: only getting one error message for all errors in a set

    - by Matt
    Hi, I'm trying to validate a domain class that has a number of subsets:

    ```groovy
    class IebeUser {
        ...
        static hasMany = [openUserAnswers: OpenUserAnswer, closedUserAnswers: ClosedUserAnswer]
    }

    class OpenUserAnswer {
        OpenQuestion openQuestion
        String text
        static belongsTo = [user: IebeUser]
        static constraints = {
            openQuestion(nullable: false)
            text(blank: false)
        }
    }

    class ClosedUserAnswer {
        ClosedQuestion closedQuestion
        ClosedAnswer answer
        static belongsTo = [user: IebeUser]
        static constraints = {
            closedQuestion(nullable: false)
            answer(nullable: false)
        }
    }
    ```

    A closed question has a set of predefined answers, and an open question lets the user enter a free-form answer. All is well until I come to validate the object after entry in a form. The params:

    ```
    [closedUserAnswers[0].answer.id:, closedUserAnswers[0]:[answer:[id:], answer.id:], password:dfgdfgdf, openUserAnswers[0].text:gdfgdfgdfg, openUserAnswers[0]:[text:gdfgdfgdfg], _isOptedOut:, create:Continue, username:gdfgdfggdf, email:[email protected], closedUserAnswers[1].answer.id:, closedUserAnswers[1]:[answer:[id:], answer.id:], openUserAnswers[1].text:, openUserAnswers[1]:[text:], firstName:dfgdf, lastName:gdfgdfgd, action:save, controller:main]
    ```

    The key bits being:

    ```
    closedUserAnswers[0].answer.id:, closedUserAnswers[0]:[answer:[id:]
    closedUserAnswers[1].answer.id:, closedUserAnswers[1]:[answer:[id:]
    openUserAnswers[1].text:, openUserAnswers[1]:[text:]
    ```

    In my tests I have two objects of type ClosedUserAnswer and two of OpenUserAnswer. But when I call validation on IebeUser I only get validation errors for the closedUserAnswers or the openUserAnswers as a whole; I don't get a validation error for each object with a problem, which is what I need. I really need an error per instance. Does anyone know what I'm doing wrong? Even when I call the validate method against each ClosedUserAnswer/OpenUserAnswer I still only get one per type. Here are my errors. Sorry for all the code, but I thought I'd include as much as possible so that it makes sense.

    ```
    Field error in object 'uk.co.cascaid.iebe.IebeUser' on field 'openUserAnswers.text': rejected value []; codes [uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.error,openUserAnswer.text.blank.error.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,openUserAnswer.text.blank.error.openUserAnswers.text,openUserAnswer.text.blank.error.text,openUserAnswer.text.blank.error,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.openUserAnswers.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank.text,uk.co.cascaid.iebe.OpenUserAnswer.text.blank,openUserAnswer.text.blank.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,openUserAnswer.text.blank.openUserAnswers.text,openUserAnswer.text.blank.text,openUserAnswer.text.blank,blank.uk.co.cascaid.iebe.IebeUser.openUserAnswers.text,blank.openUserAnswers.text,blank.text,blank]; arguments [text,class uk.co.cascaid.iebe.OpenUserAnswer]; default message [Property [{0}] of class [{1}] cannot be blank]

    Field error in object 'uk.co.cascaid.iebe.IebeUser' on field 'closedUserAnswers.answer': rejected value [null]; codes [uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.error,closedUserAnswer.answer.nullable.error.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,closedUserAnswer.answer.nullable.error.closedUserAnswers.answer,closedUserAnswer.answer.nullable.error.answer,closedUserAnswer.answer.nullable.error,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.closedUserAnswers.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable.answer,uk.co.cascaid.iebe.ClosedUserAnswer.answer.nullable,closedUserAnswer.answer.nullable.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,closedUserAnswer.answer.nullable.closedUserAnswers.answer,closedUserAnswer.answer.nullable.answer,closedUserAnswer.answer.nullable,nullable.uk.co.cascaid.iebe.IebeUser.closedUserAnswers.answer,nullable.closedUserAnswers.answer,nullable.answer,nullable]; arguments [answer,class uk.co.cascaid.iebe.ClosedUserAnswer]; default message [Property [{0}] of class [{1}] cannot be null]
    ```
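    A hedged workaround: Grails' cascaded validation tends to collapse child errors onto a single association path, so validating each child instance explicitly and reading that instance's own errors object may give the per-instance messages needed. A sketch:

    ```groovy
    // Sketch: collect one error list per answer instance instead of relying
    // on the parent's cascaded errors (validate() and errors.allErrors are
    // standard GORM; the logging is illustrative).
    user.openUserAnswers.eachWithIndex { answer, i ->
        if (!answer.validate()) {
            answer.errors.allErrors.each { err ->
                log.warn "openUserAnswers[${i}]: ${err}"
            }
        }
    }
    user.closedUserAnswers.eachWithIndex { answer, i ->
        if (!answer.validate()) {
            answer.errors.allErrors.each { err ->
                log.warn "closedUserAnswers[${i}]: ${err}"
            }
        }
    }
    ```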


  • Why only one video shows up in ASP.NET Video panel?

    - by user284523
    It is possible to have multiple videos with Flowplayer: http://flowplayer.org/demos/installation/multiple-players.html

    To do so dynamically in ASP.NET, one has to use a Panel (see http://idevwebs.com/Default.aspx?id=25):

    ```html
    <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
      <!-- A minimal Flowplayer setup to get you started -->
      <!-- include the Flowplayer JavaScript file that does Flash embedding and provides the Flowplayer API -->
      <script type="text/javascript" src="flowplayer-3.1.4.min.js"></script>
      <!-- page title -->
      <title>Minimal Flowplayer setup</title>
    </head>
    <body>
      <form id="form1" runat="server">
      <div id="page">
        <h1>Minimal Flowplayer setup</h1>
        <p>View commented source code to get familiar with Flowplayer installation.</p>

        <!-- this A tag is where your Flowplayer will be placed. it can be anywhere -->
        <!-- player #1 -->
        <asp:Panel ID="panelVideo" runat="server">
        </asp:Panel>

        <!-- this will install flowplayer inside the previous A tag -->
        <script type="text/javascript">
          flowplayer("player", "flowplayer-3.1.5.swf");
        </script>

        <!-- after this line is purely informational stuff. does not affect Flowplayer functionality -->
        <p>If you are running these examples <strong>locally</strong> and not on some webserver you must edit your
           <a href="http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager04.html">Flash security settings</a>.</p>
        <p class="less">Select "Edit locations" &gt; "Add location" &gt; "Browse for files" and select the flowplayer-x.x.x.swf you just downloaded.</p>
        <h2>Documentation</h2>
        <p><a href="http://flowplayer.org/documentation/installation/index.html">Flowplayer installation</a></p>
        <p><a href="http://flowplayer.org/documentation/configuration/index.html">Flowplayer configuration</a></p>
        <p>See this identical page on the <a href="http://flowplayer.org/demos/example/index.htm">Flowplayer website</a></p>
      </div>
      </form>
    </body>
    </html>
    ```

    and then add the video anchors dynamically:

    ```csharp
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            this.panelVideo.Controls.Add(new LiteralControl("<a href=\"" + "http://blip.tv/file/get/" + "KimAronson-TwentySeconds58192" + ".flv\" style=\"display:block;width:388px;height:230px\" id=\"player\"></a>"));
            this.panelVideo.Controls.Add(new LiteralControl("<a href=\"" + "http://blip.tv/file/get/" + "KimAronson-TwentySeconds67463" + ".flv\" style=\"display:block;width:388px;height:230px\" id=\"player\"></a>"));
        }
    }
    ```

    But it only works partially: the first video shows up but not both, and none of the HTML below shows up either.
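    A likely culprit, offered as a guess from the markup above: both generated anchors share id="player", and flowplayer("player", ...) binds one element per id. A sketch that emits unique ids, mirroring the multiple-players demo linked above:

    ```csharp
    // Sketch (assumption): give each anchor a unique id so each can get its own player.
    protected void Page_Load(object sender, EventArgs e)
    {
        string[] clips = { "KimAronson-TwentySeconds58192", "KimAronson-TwentySeconds67463" };
        for (int i = 0; i < clips.Length; i++)
        {
            this.panelVideo.Controls.Add(new LiteralControl(
                "<a href=\"http://blip.tv/file/get/" + clips[i] + ".flv\"" +
                " style=\"display:block;width:388px;height:230px\"" +
                " id=\"player" + i + "\"></a>"));
        }
    }
    ```

    with the page script then installing one player per anchor, e.g. flowplayer("player0", "flowplayer-3.1.5.swf"); flowplayer("player1", "flowplayer-3.1.5.swf");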


  • pass an ID with a hyperlink but can't get this ID value from a FK in one table when I click insert

    - by susan
    Something strange is happening in my code. I have a hyperlink that passes an ID value in a query string to a second page. On the second page I have two SqlDataSources, and both should receive this ID value and pass it to a filter parameter to show something in a DataList. In other words, the first page has a hyperlink that reads the ID value from a datasource and passes it to the second page, like this:

    ```html
    <asp:HyperLink ID="HyperLink1" runat="server"
        NavigateUrl='<%# "~/forumpage.aspx?ID="+Eval("ID")%>'><%#Eval("title")%>
    </asp:HyperLink>
    ```

    On the second page, the first SqlDataSource has a query like `...where ID=@id` and gets this ID from the query string; it works great. But I have a problem with the second SqlDataSource on that page. Its query ends with `...forumreply.questionn_id=@id`, and the parameter refers to the same query string ID passed from the first page. When I click the insert button it shows me this error:

    ```
    Error: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_forumreply_forumquestions". The conflict occurred in database "forum", table "dbo.forumquestions", column 'ID'. The statement has been terminated.
    ```

    My tables:

    * forumquestions(ID, user_id (FK), Cat_id (FK), title, bodytext)
    * forumreply(ID, userr_id (FK), questionn_id (FK), titlereply, bodytextreply)

    When I hard-code a number such as 1 into questionn_id in the code-behind, the insert succeeds; but when it has to read the value from the filtered datasource, that field causes the problem. Please help, I really need to get past this part, and since I am new I can't see the logic clearly.

    ```html
    <asp:SqlDataSource ID="sdsreply" runat="server"
        ConnectionString="<%$ ConnectionStrings:forumConnectionString %>"
        SelectCommand="SELECT forumreply.ID, forumreply.userr_id, forumreply.questionn_id, forumreply.bodytextreply, forumreply.datetimereply, forumquestions.ID AS Expr1, forumusers.ID AS Expr2, forumusers.username
                       FROM forumquestions
                       INNER JOIN forumreply ON forumquestions.ID = forumreply.questionn_id
                       INNER JOIN forumusers ON forumquestions.user_id = forumusers.ID AND forumreply.userr_id = forumusers.ID
                       WHERE forumreply.questionn_id=@questionn_id">
        <SelectParameters>
            <asp:QueryStringParameter Name="questionn_id" QueryStringField="ID" />
        </SelectParameters>
    </asp:SqlDataSource>
    ```

    This is the code-behind on the second page:

    ```csharp
    {
        if (Session["userid"] != null)
        {
            lblreply.Text = Session["userid"].ToString();
        }
        else
        {
            Session["userid"] = null;
        }
        if (HttpContext.Current.User.Identity.IsAuthenticated)
        {
            lblshow.Text = string.Empty;
            string d = HttpContext.Current.User.Identity.Name;
            lblshow.Text = d + "???? ??? ?????.";
            foreach (DataListItem item in DataList2.Items)
            {
                Label questionn_idLabel = (Label)item.FindControl("questionn_idLabel");
                Label userr_idLabel = (Label)item.FindControl("userr_idLabel");
                lbltest.Text = string.Empty;
                lbltest.Text = questionn_idLabel.Text;
                lblreply.Text = string.Empty;
                lblreply.Text = userr_idLabel.Text;
            }
        }
        else
        {
            lblshow.Text = "??? ??? ??? ??? ?? ?? ?????? ???? ???? ???? ????? ??? ??? ? ??? ????? ???????.";
        }
    }
    ```

    and the insert button handler:

    ```csharp
    {
        if (HttpContext.Current.User.Identity.IsAuthenticated)
        {
            if (Page.IsValid)
            {
                SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["forumConnectionString"].ConnectionString);
                try
                {
                    con.Open();
                    SqlCommand cmd = new SqlCommand("insert into forumreply (userr_id,questionn_id,bodytextreply,datetimereply) values (@userr_id,@questionn_id,@bodytextreply,@datetimereply)", con);
                    cmd.Parameters.AddWithValue("userr_id", lblreply.Text);
                    cmd.Parameters.AddWithValue("questionn_id", lbltest.Text);
                    cmd.Parameters.AddWithValue("bodytextreply", txtbody.Text);
                    cmd.Parameters.AddWithValue("datetimereply", DateTime.Now);
                    cmd.ExecuteNonQuery();
                }
                catch (Exception exp)
                {
                    Response.Write("<b>Error:</b>");
                    Response.Write(exp.Message);
                }
                finally
                {
                    con.Close();
                }
                lblmsg.Text = "???? ??? ?? ?????? ??? ?????.thx";
                lblshow.Visible = false;
                //lbltxt.Text = txtbody.Text;
                txtbody.Text = string.Empty;
            }
        }
        else
        {
            lblmsg.Text = string.Empty;
            Session["rem"] = Request.UrlReferrer.AbsoluteUri;
            Response.Redirect("~/login.aspx");
        }
    }
    ```
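    One hedged suggestion, given that the FK value comes from lbltest, which is only filled while looping over DataList2 items: if the DataList is empty or not yet bound on postback, lbltest.Text is empty and the insert sends a blank questionn_id, which matches the FK violation shown. Reading the id straight from the query string the first page already passes avoids that dependency. A sketch (parameter name taken from the insert statement above):

    ```csharp
    // Sketch: take the question id from the query string rather than a label
    // populated from DataList2, so it is always present when the insert runs.
    int questionId = int.Parse(Request.QueryString["ID"]);
    cmd.Parameters.AddWithValue("@questionn_id", questionId);
    ```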


  • Watching setTimeout loops so that only one is running at a time.

    - by DA
    I'm creating a content rotator in jQuery: 5 items total, item 1 fades in, pauses 10 seconds, fades out, then item 2 fades in. Repeat. Simple enough. Using setTimeout I can call a set of functions that create a loop and repeat the process indefinitely. I now want to add the ability to interrupt this rotator at any time by clicking a navigation element to jump directly to one of the content items.

    I originally started down the path of pinging a variable constantly (say every half second) that would check whether a navigation element was clicked and, if so, abandon the loop and restart it based on the item that was clicked. The challenge I ran into was how to actually ping a variable via a timer. The solution is to dive into JavaScript closures, which are a little over my head but definitely something I need to delve into more. In the process, though, I came up with an alternative option that actually seems better performance-wise (theoretically, at least). I have a sample running here: http://jsbin.com/uxupi/14 (it uses console.log, so have Firebug running). Sample script:

    ```javascript
    $(document).ready(function(){
      var loopCount = 0;

      $('p#hello').click(function(){
        loopCount++;
        doThatThing(loopCount);
      });

      function doThatOtherThing(currentLoopCount) {
        console.log('doThatOtherThing-' + currentLoopCount);
        if (currentLoopCount == loopCount) {
          setTimeout(function(){ doThatThing(currentLoopCount); }, 5000);
        }
      }

      function doThatThing(currentLoopCount) {
        console.log('doThatThing-' + currentLoopCount);
        if (currentLoopCount == loopCount) {
          setTimeout(function(){ doThatOtherThing(currentLoopCount); }, 5000);
        }
      }
    });
    ```

    The logic: every click of the trigger element kicks off the loop, passing into it a variable equal to the current value of the global variable. That variable gets passed back and forth between the functions in the loop. Each click of the trigger also increments the global variable, so that subsequent calls of the loop have a unique local variable. Then, within the loop, before the next step is called, it checks whether its variable still matches the global variable. If not, it knows that a new loop has already been activated, so it just ends the existing loop. Thoughts on this? Valid solution? Better options? Caveats? Dangers?

    UPDATE: I'm using John's suggestion below via the clearTimeout option. However, I can't quite get it to work. The logic is as follows:

    ```javascript
    var slideNumber = 0;
    var timeout = null;

    function startLoop(slideNumber) {
      // ...do stuff here to set up the slide based on slideNumber...
      slideFadeIn();
    }

    function continueCheck() {
      if (timeout != null) {
        // cancel the scheduled task
        clearTimeout(timeout);
        timeout = null;
        return false;
      } else {
        return true;
      }
    }

    function slideFadeIn() {
      if (continueCheck) {
        // a new loop hasn't been called yet so proceed...
        // fade in the LI
        $currentListItem.fadeIn(fade, function() {
          if (multipleFeatures) {
            timeout = setTimeout(slideFadeOut, display);
          }
        });
      }
    }

    function slideFadeOut() {
      if (continueLoop) {
        // a new loop hasn't been called yet so proceed...
        slideNumber = slideNumber + 1;
        if (slideNumber == features.length) {
          slideNumber = 0;
        }
        timeout = setTimeout(function(){ startLoop(slideNumber); }, 100);
      }
    }

    startLoop(slideNumber);
    ```

    The above kicks off the looping. I then have navigation items that, when clicked, should stop the loop above and restart it with a new beginning slide:

    ```javascript
    $(myNav).click(function(){
      clearTimeout(timeout);
      timeout = null;
      startLoop(thisItem);
    });
    ```

    If I comment out startLoop from the click event, it does indeed stop the initial loop. However, if I leave that last line in, it doesn't actually stop the initial loop. Why? What happens is that both loops seem to run in parallel for a period. So, when I click my navigation, clearTimeout is called, which clears it.
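    On the "Why?": clearTimeout only cancels a timer that has not fired yet; it cannot stop a jQuery fade that is already in flight, and that fade's completion callback will happily schedule the next step of the old loop. A sketch applying the asker's original token idea to the update; $currentListItem, fade, display and slideFadeOut are the names from the code above, and the token logic is the assumption:

    ```javascript
    var loopId = 0; // incremented whenever a new loop starts

    function startLoop(slideNumber) {
      var myId = ++loopId;  // capture a token for this loop run
      // ...set up the slide based on slideNumber, then...
      slideFadeIn(slideNumber, myId);
    }

    function slideFadeIn(slideNumber, myId) {
      $currentListItem.fadeIn(fade, function () {
        // If a navigation click started a newer loop while the fade was
        // running, this callback is stale: bail out instead of rescheduling.
        if (myId !== loopId) return;
        timeout = setTimeout(function () { slideFadeOut(slideNumber, myId); }, display);
      });
    }
    ```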


  • Mysql optimization question - How to apply AND logic in search and limit on results in one query?

    - by sandeepan-nath
    This is a little long, but I have provided all the database structures and queries so that you can run it immediately and help me. Run the following queries:

    ```sql
    CREATE TABLE IF NOT EXISTS `Tutor_Details` (
      `id_tutor` int(10) NOT NULL auto_increment,
      `firstname` varchar(100) NOT NULL default '',
      `surname` varchar(155) NOT NULL default '',
      PRIMARY KEY (`id_tutor`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=41;

    INSERT INTO `Tutor_Details` (`id_tutor`, `firstname`, `surname`) VALUES
    (1, 'Sandeepan', 'Nath'),
    (2, 'Bob', 'Cratchit');

    CREATE TABLE IF NOT EXISTS `Classes` (
      `id_class` int(10) unsigned NOT NULL auto_increment,
      `id_tutor` int(10) unsigned NOT NULL default '0',
      `class_name` varchar(255) default NULL,
      PRIMARY KEY (`id_class`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=229;

    INSERT INTO `Classes` (`id_class`, `class_name`, `id_tutor`) VALUES
    (1, 'My Class', 1),
    (2, 'Sandeepan Class', 2);

    CREATE TABLE IF NOT EXISTS `Tags` (
      `id_tag` int(10) unsigned NOT NULL auto_increment,
      `tag` varchar(255) default NULL,
      PRIMARY KEY (`id_tag`),
      UNIQUE KEY `tag` (`tag`),
      KEY `id_tag` (`id_tag`),
      KEY `tag_2` (`tag`),
      KEY `tag_3` (`tag`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18;

    INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
    (1, 'Bob'), (6, 'Class'), (2, 'Cratchit'), (4, 'Nath'), (3, 'Sandeepan'), (5, 'My');

    CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
      `id_tag` int(10) unsigned NOT NULL default '0',
      `id_tutor` int(10) default NULL,
      KEY `Tutors_Tag_Relations` (`id_tag`),
      KEY `id_tutor` (`id_tutor`),
      KEY `id_tag` (`id_tag`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`) VALUES
    (3, 1), (4, 1), (1, 2), (2, 2);

    CREATE TABLE IF NOT EXISTS `Class_Tag_Relations` (
      `id_tag` int(10) unsigned NOT NULL default '0',
      `id_class` int(10) default NULL,
      `id_tutor` int(10) NOT NULL,
      KEY `Class_Tag_Relations` (`id_tag`),
      KEY `id_class` (`id_class`),
      KEY `id_tag` (`id_tag`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

    INSERT INTO `Class_Tag_Relations` (`id_tag`, `id_class`, `id_tutor`) VALUES
    (5, 1, 1), (6, 1, 1), (3, 2, 2), (6, 2, 2);
    ```

    About the tables: there are tutors who create classes.

    * Tutor_Details - stores tutors.
    * Classes - stores classes created by tutors.

    For searching we use a tags-based approach. All keywords are stored in the Tags table (while classes/tutors are created), and tag relations are entered in the Tutors_Tag_Relations and Class_Tag_Relations tables (for tutors and classes respectively):

    * Tags - id_tag, tag (a unique field).
    * Tutors_Tag_Relations - stores tag relations entered when tutors are created.
    * Class_Tag_Relations - stores tag relations entered when any tutor creates a class.

    In the present data, tutor "Sandeepan Nath" has created the class "My Class" and "Bob Cratchit" has created "Sandeepan Class".

    The requirement is to return tutor records from the Tutor_Details table such that all the search terms (AND logic) are present in the union of these two sets: 1) the tutor's Tutor_Details record, and 2) the classes created by that tutor in the Classes table. Example searches and expected results:

    | Search term | Result |
    |---|---|
    | "Sandeepan Class" | Tutor Sandeepan Nath's record from the Tutor_Details table |
    | "Class" | Both tutors |

    Most importantly, there should be only one MySQL query, with a LIMIT applicable to the number of results. Following is a working query I have so far; it just applies OR logic to the search keywords instead of the desired AND logic:

    ```sql
    SELECT td.*
    FROM Tutor_Details AS td
    LEFT JOIN Tutors_Tag_Relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor
    LEFT JOIN Classes AS wc ON td.id_tutor = wc.id_tutor
    INNER JOIN Class_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
    LEFT JOIN Tags AS t ON t.id_tag = ttagrels.id_tag OR t.id_tag = wtagrels.id_tag
    WHERE t.tag LIKE '%Sandeepan%' OR t.tag LIKE '%Nath%'
    GROUP BY td.id_tutor
    LIMIT 20
    ```

    Please help me with anything you can. Thanks.
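    For the AND requirement itself, one common MySQL pattern, sketched against the schema above: keep the per-tutor grouping but demand in HAVING that every search term matched at least one tag (MySQL treats the boolean LIKE as 0/1 inside SUM). The two LIKE patterns below stand in for whatever terms the search supplies:

    ```sql
    SELECT td.*
    FROM Tutor_Details AS td
    LEFT JOIN Tutors_Tag_Relations AS ttagrels ON td.id_tutor = ttagrels.id_tutor
    LEFT JOIN Class_Tag_Relations AS wtagrels ON td.id_tutor = wtagrels.id_tutor
    LEFT JOIN Tags AS t ON t.id_tag = ttagrels.id_tag OR t.id_tag = wtagrels.id_tag
    GROUP BY td.id_tutor
    HAVING SUM(t.tag LIKE '%Sandeepan%') > 0  -- term 1 must match somewhere
       AND SUM(t.tag LIKE '%Class%') > 0      -- term 2 must match somewhere
    LIMIT 20;
    ```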


  • How to use jQuery to assign a class to only one radio button in a group when user clicks on it?

    - by xraminx
    I have the following markup. I would like to add class_A to the <p class="subitem-text"> that holds a radio button and its label when the user clicks on the <input> or <label>. If the user then clicks some other radio button/label in the same group, I would like to add class_A to that radio button's parent paragraph and remove class_A from any other paragraph holding radio buttons/labels in that group. Effectively, in each <li>, only one <p class="subitem-text"> should have class_A on it. Is there a jQuery plug-in that does this? Or is there a simple trick that can do it?

    ```html
    <ul>
      <li>
        <div class="myitem-wrapper" id="10">
          <div class="myitem clearfix">
            <span class="number">1</span>
            <div class="item-text">Some text here</div>
          </div>
          <p class="subitem-text">
            <input type="radio" name="10" value="15" id="99">
            <label for="99">First subitem</label>
          </p>
          <p class="subitem-text">
            <input type="radio" name="10" value="77" id="21">
            <label for="21">Second subitem</label>
          </p>
        </div>
      </li>
      <li>
        <div class="myitem-wrapper" id="11">
          <div class="myitem clearfix">
            <span class="number">2</span>
            <div class="item-text">Some other text here ...</div>
          </div>
          <p class="subitem-text">
            <input type="radio" name="11" value="32" id="201">
            <label for="201">First subitem ...</label>
          </p>
          <p class="subitem-text">
            <input type="radio" name="11" value="68" id="205">
            <label for="205">Second subitem ...</label>
          </p>
          <p class="subitem-text">
            <input type="radio" name="11" value="160" id="206">
            <label for="206">Third subitem ...</label>
          </p>
        </div>
      </li>
    </ul>
    ```
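    One way to do this without a plug-in, sketched against the markup above (assuming a jQuery version contemporary with it, 1.3+ for .closest()). Clicking the label also fires the radio's change event because label[for] forwards the click:

    ```javascript
    // Mark the checked radio's parent <p class="subitem-text"> and clear the
    // class from the other paragraphs in the same group (the same <li>).
    $('p.subitem-text input[type=radio]').change(function () {
      $(this).closest('p.subitem-text')
             .addClass('class_A')
             .siblings('p.subitem-text')
             .removeClass('class_A');
    });
    ```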


  • js code works for one variable(?), how to make it work for many?

    - by jerry87
    I have working JavaScript code that shows a different div for each selected option:

    ```html
    <html>
    <head>
      <script src="http://code.jquery.com/jquery-latest.js"></script>
      <script type="text/javascript">
        $(document).ready(function(){
          $('#box1').hide();
          $('#box2').hide();
          $('#box3').hide();
          $("#thechoices").change(function(){
            $("#" + this.value).show().siblings().hide();
          });
          $("#thechoices").change();
        });
      </script>
    </head>
    <body>
      <select id="thechoices">
        <option value="box1">Box 1</option>
        <option value="box2">Box 2</option>
        <option value="box3">Box 3</option>
      </select>

      <!-- the DIVs -->
      <div id="boxes">
        <div id="box1"><p>Box 1 stuff...</p></div>
        <div id="box2"><p>Box 2 stuff...</p></div>
        <div id="box3"><p>Box 3 stuff...</p></div>
      </div>
    </body>
    </html>
    ```

    I have no JS experience, so I don't know how to make it work several times on one page, for many "thechoices" selects. Something like copy-paste, but more suitable than this:

    ```html
    <script type="text/javascript">
      $(document).ready(function(){
        $('#box1').hide();
        $('#box2').hide();
        $('#box3').hide();
        $('#box4').hide();
        $('#box5').hide();
        $('#box6').hide();
        $("#thechoices").change(function(){
          $("#" + this.value).show().siblings().hide();
        });
        $("#thechoices2").change(function(){
          $("#" + this.value).show().siblings().hide();
        });
        $("#thechoices").change();
        $("#thechoices2").change();
      });
    </script>
    </head>
    <body>
      <select id="thechoices">
        <option value="box1">Box 1</option>
        <option value="box2">Box 2</option>
        <option value="box3">Box 3</option>
      </select>
      <!-- the DIVs -->
      <div id="boxes">
        <div id="box1"><p>Box 1 stuff...</p></div>
        <div id="box2"><p>Box 2 stuff...</p></div>
        <div id="box3"><p>Box 3 stuff...</p></div>
      </div>

      <select id="thechoices2">
        <option value="box4">1</option>
        <option value="box5">2</option>
        <option value="box6">3</option>
      </select>
      <!-- the DIVs -->
      <div id="boxes">
        <div id="box4"><p>1 stuff...</p></div>
        <div id="box5"><p>2 stuff...</p></div>
        <div id="box6"><p>3 stuff...</p></div>
      </div>
    ```

    I know you can help me; it seems simple but I can't manage it. How can I change my second code to work the same way, but not only for two selects? I need it for many, without copy-pasting the same section as in my second code.
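    A sketch of one generic approach, under the assumption that every select whose id starts with "thechoices" is immediately followed by its own container of box divs, as in the markup above:

    ```javascript
    $(document).ready(function () {
      $('select[id^=thechoices]').each(function () {
        // Each select drives the group of divs inside the container right after it.
        var $boxes = $(this).nextAll('div').first().children('div');
        $(this).change(function () {
          $boxes.hide().filter('#' + this.value).show();
        }).change(); // run once so the initial selection is shown
      });
    });
    ```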


  • List of all states from COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE tables

    - by Deepak Arora
    In many of my engagements I get asked repeatedly about the states of composites in 11g and how to decipher them, especially when we are troubleshooting issues around purging. I have compiled a list of all the states from the COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE and MEDIATOR_INSTANCE tables. These are the primary tables used with BPEL composites and with the ECID.

    COMPOSITE_INSTANCE states:

    | State | Description |
    |---|---|
    | 0 | Running |
    | 1 | Completed |
    | 2 | Running with faults |
    | 3 | Completed with faults |
    | 4 | Running with recovery required |
    | 5 | Completed with recovery required |
    | 6 | Running with faults and recovery required |
    | 7 | Completed with faults and recovery required |
    | 8 | Running with suspended |
    | 9 | Completed with suspended |
    | 10 | Running with faults and suspended |
    | 11 | Completed with faults and suspended |
    | 12 | Running with recovery required and suspended |
    | 13 | Completed with recovery required and suspended |
    | 14 | Running with faults, recovery required, and suspended |
    | 15 | Completed with faults, recovery required, and suspended |
    | 16 | Running with terminated |
    | 17 | Completed with terminated |
    | 18 | Running with faults and terminated |
    | 19 | Completed with faults and terminated |
    | 20 | Running with recovery required and terminated |
    | 21 | Completed with recovery required and terminated |
    | 22 | Running with faults, recovery required, and terminated |
    | 23 | Completed with faults, recovery required, and terminated |
    | 24 | Running with suspended and terminated |
    | 25 | Completed with suspended and terminated |
    | 26 | Running with faulted, suspended, and terminated |
    | 27 | Completed with faulted, suspended, and terminated |
    | 28 | Running with recovery required, suspended, and terminated |
    | 29 | Completed with recovery required, suspended, and terminated |
    | 30 | Running with faulted, recovery required, suspended, and terminated |
    | 31 | Completed with faulted, recovery required, suspended, and terminated |
    | 32 | Unknown |
    | 64 |  |

    Any value in the range of 32 to 63 indicates that the composite instance state has not been enabled, but the instance state is updated for faults, aborts, etc.

    CUBE_INSTANCE states:

    | State | Description |
    |---|---|
    | 0 | STATE_INITIATED |
    | 1 | STATE_OPEN_RUNNING |
    | 2 | STATE_OPEN_SUSPENDED |
    | 3 | STATE_OPEN_FAULTED |
    | 4 | STATE_CLOSED_PENDING_CANCEL |
    | 5 | STATE_CLOSED_COMPLETED |
    | 6 | STATE_CLOSED_FAULTED |
    | 7 | STATE_CLOSED_CANCELLED |
    | 8 | STATE_CLOSED_ABORTED |
    | 9 | STATE_CLOSED_STALE |
    | 10 | STATE_CLOSED_ROLLED_BACK |

    DLV_MESSAGE states:

    | State | Description |
    |---|---|
    | 0 | STATE_UNRESOLVED |
    | 1 | STATE_RESOLVED |
    | 2 | STATE_HANDLED |
    | 3 | STATE_CANCELLED |
    | 4 | STATE_MAX_RECOVERED |

    Since in 11g the INVOKE_MESSAGES table is no longer there, the DLV_TYPE column distinguishes a new message (invoke) from a callback (DLV):

    | DLV_TYPE | Description |
    |---|---|
    | 1 | Invoke Message |
    | 2 | DLV Message |

    MEDIATOR_INSTANCE states:

    | State | Description |
    |---|---|
    | 0 | No faults, but there still might be running instances |
    | 1 | At least one case is aborted by user |
    | 2 | At least one case is faulted (non-recoverable) |
    | 3 | At least one case is faulted and one case is aborted |
    | 4 | At least one case is in recovery required state |
    | 5 | At least one case is in recovery required state and at least one is aborted |
    | 6 | At least one case is in recovery required state and at least one is faulted |
    | 7 | At least one case is in recovery required state, one faulted and one aborted |
    | >= 8 and < 16 | Running |
    | >= 16 | Stale |

    In my next blog posting I will walk through the lifecycle of a BPEL process using the above states for the following use cases:

    * New BPEL process - initial Receive activity
    * Callback BPEL process - mid-level Receive activity

    As always, comments and questions welcome! Deepak
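    When troubleshooting purging, a quick way to put these tables to work is to profile instance counts by state. A sketch, assuming access to the SOA infrastructure schema that owns these tables:

    ```sql
    -- How many composite instances sit in each state?
    -- Decode the state numbers using the COMPOSITE_INSTANCE table above.
    SELECT state, COUNT(*) AS instance_count
    FROM composite_instance
    GROUP BY state
    ORDER BY state;
    ```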


  • Function calls not working in my page

    - by Vivek Dragon
    I made a select menu that works with the Google Font API. I built two working functions in JSBin; here is my work: http://jsbin.com/ocutuk/18/. But when I copy the code into an HTML page of my own, it doesn't even load the font names into the page. I have tried to make it work but I'm at a dead end. This is my HTML:

    ```html
    <!DOCTYPE html>
    <html>
    <head>
      <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
      <meta charset=utf-8 />
      <title>FONT API</title>
      <script>
        function SetFonts(fonts) {
          for (var i = 0; i < fonts.items.length; i++) {
            $('#styleFont')
              .append($("<option></option>")
                .attr("value", fonts.items[i].family)
                .text(fonts.items[i].family));
          }
        }

        var script = document.createElement('script');
        script.src = 'https://www.googleapis.com/webfonts/v1/webfonts?key=AIzaSyB8Ua6XIfe-gqbkE8P3XL4spd0x8Ft7eWo&callback=SetFonts';
        document.body.appendChild(script);

        WebFontConfig = {
          google: { families: [
            'ABeeZee', 'Abel', 'Abril Fatface', 'Aclonica', 'Acme', 'Actor', 'Adamina', 'Advent Pro',
            'Aguafina Script', 'Akronim', 'Aladin', 'Aldrich', 'Alegreya', 'Alegreya SC', 'Alex Brush',
            'Alfa Slab One', 'Alice', 'Alike', 'Alike Angular', 'Allan', 'Allerta', 'Allerta Stencil',
            'Allura', 'Almendra', 'Almendra Display', 'Almendra SC', 'Amarante', 'Amaranth', 'Amatic SC',
            'Amethysta', 'Anaheim', 'Andada', 'Andika', 'Angkor', 'Annie Use Your Telescope', 'Anonymous Pro',
            'Antic', 'Antic Didone', 'Antic Slab', 'Anton', 'Arapey', 'Arbutus', 'Arbutus Slab',
            'Architects Daughter', 'Archivo Black', 'Archivo Narrow', 'Arimo', 'Arizonia', 'Armata',
            'Artifika', 'Arvo', 'Asap', 'Asset', 'Astloch', 'Asul', 'Atomic Age', 'Aubrey', 'Audiowide',
            'Autour One', 'Average', 'Average Sans', 'Averia Gruesa Libre', 'Averia Libre',
            'Averia Sans Libre', 'Averia Serif Libre', 'Bad Script', 'Balthazar', 'Bangers', 'Basic',
            'Battambang', 'Baumans', 'Bayon', 'Belgrano', 'Belleza', 'BenchNine', 'Bentham',
            'Berkshire Swash', 'Bevan', 'Bigelow Rules', 'Bigshot One', 'Bilbo', 'Bilbo Swash Caps',
            'Bitter', 'Black Ops One', 'Bokor', 'Bonbon', 'Boogaloo', 'Bowlby One', 'Bowlby One SC',
            'Brawler', 'Bree Serif', 'Bubblegum Sans', 'Bubbler One', 'Buda', 'Buenard', 'Butcherman',
            'Butterfly Kids', 'Cabin', 'Cabin Condensed', 'Cabin Sketch', 'Caesar Dressing', 'Cagliostro',
            'Calligraffitti', 'Cambo', 'Candal', 'Cantarell', 'Cantata One', 'Cantora One', 'Capriola',
            'Cardo', 'Carme', 'Carrois Gothic', 'Carrois Gothic SC', 'Carter One', 'Caudex',
            'Cedarville Cursive', 'Ceviche One', 'Changa One', 'Chango', 'Chau Philomene One', 'Chela One',
            'Chelsea Market', 'Chenla', 'Cherry Cream Soda', 'Cherry Swash', 'Chewy', 'Chicle', 'Chivo',
            'Cinzel', 'Cinzel Decorative', 'Clicker Script', 'Coda', 'Coda Caption', 'Codystar', 'Combo',
            'Comfortaa', 'Coming Soon', 'Concert One', 'Condiment', 'Content', 'Contrail One',
            'Convergence', 'Cookie', 'Copse', 'Corben', 'Courgette', 'Cousine', 'Coustard',
            'Covered By Your Grace', 'Crafty Girls', 'Creepster', 'Crete Round', 'Crimson Text',
            'Croissant One', 'Crushed', 'Cuprum', 'Cutive', 'Cutive Mono'
          ]}
        };

        (function() {
          var wf = document.createElement('script');
          wf.src = ('https:' == document.location.protocol ? 'https' : 'http') +
                   '://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js';
          wf.type = 'text/javascript';
          wf.async = 'true';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(wf, s);
        })();

        $("#styleFont").change(function () {
          var id = $('#styleFont option' + ':selected').val();
          $("#custom_text").css('font-family', id);
        });
      </script>
      <style>
        #custom_text { font-family: Arial; resize: none; margin-top: 20px; width: 500px; }
        #styleFont { width: 100px; }
      </style>
    </head>
    <body>
      <select id="styleFont"></select><br>
      <textarea id="custom_text"></textarea>
    </body>
    </html>
    ```

    How can I make it work? What mistake am I making here?
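    One difference between the JSBin version and the standalone page, offered as a likely cause rather than a confirmed one: in the page above, the script runs in the <head> before the body exists, so document.body is null and #styleFont is not in the DOM when the change handler is bound. A sketch that defers the same work until DOM ready (YOUR_KEY stands in for the API key):

    ```javascript
    $(document).ready(function () {
      // Load the font list only once document.body exists.
      var script = document.createElement('script');
      script.src = 'https://www.googleapis.com/webfonts/v1/webfonts?key=YOUR_KEY&callback=SetFonts';
      document.body.appendChild(script);

      // Bind the handler only once #styleFont exists in the DOM.
      $('#styleFont').change(function () {
        $('#custom_text').css('font-family', $(this).val());
      });
    });
    ```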


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article). After letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point that you can start rolling the technology out further.

Creating an effective IRM classification model

Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.

The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum...

One single classification across the entire business

Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this:

- Document security classification decisions are simple. You only have one context to choose from!
- User provisioning is simple: just make sure everyone has a role in the only context in the business.
- Administration is very low. If you assign rights to groups from the business user repository you probably never have to touch IRM administration again.

There are however some obvious downsides to this model:

- All users have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
- You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
- Changing a user's role affects every single document ever secured.

Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today; imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.

A context for each and every possible granular use case

Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information for both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group:

Q1FY2010 Restricted Internal - Engineering Group 1 - Research
Q1FY2010 Restricted Internal - Engineering Group 1 - Design
Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Restricted External - Engineering Group 1 - Research
Q1FY2010 Restricted External - Engineering Group 1 - Design
Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential Internal - Engineering Group 1 - Research
Q1FY2010 Confidential Internal - Engineering Group 1 - Design
Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Confidential External - Engineering Group 1 - Research
Q1FY2010 Confidential External - Engineering Group 1 - Design
Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
Q1FY2010 Top Secret External - Engineering Group 1 - Research
Q1FY2010 Top Secret External - Engineering Group 1 - Design
Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing

That is 18 contexts for just one engineering group; multiply by the 6 groups and you have 108. And because the model is reviewed quarterly, you are creating/reviewing another 18 every 3 months, so after a year a single group alone accounts for 72 contexts. What would be the advantages of such a complex classification model?

- You can satisfy very granular rights requirements; for example, only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis.
- Your business may have very complex rights requirements, and mapping these directly to IRM may be an obvious exercise.

The disadvantages of such a classification model are significant:

- Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
- From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.

- Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
- It is hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.

Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.

What makes a good context?

Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which helps you define a context. First ask these questions about a set of documents:

- What is the topic?
- Who are the legitimate contributors on this topic?
- Who is the authorized readership?

If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors; therefore it is better for a document to be sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business.

Deciding on the use of roles in the context

Once you have decided on that first initial use case and a context to create, let's look at the details you need to decide upon.
For each context, identify:

Administrative roles
- Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
- Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
- Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.

Document related roles
- Contributors: the people who create and edit documents in this context.
- Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published-work and Work-in-Progress.)
- Readers: the people who read documents from this context.

Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.

Reviewing the features and security for context roles

At this point we have decided on a classification of information, and we understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think about why you are protecting the documents in the first place. It is to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However, I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for giving people with legitimate needs appropriate access will be rewarded with acceptance from the user community.
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.

The big print issue...

Printing is often an issue of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain; it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights:

- Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
- Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk.
- Print activity is audited, therefore you can monitor and react to users abusing print rights.

Summary

In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group to be secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introduce the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
Earlier, I had written two articles related to Shrinking Database and about why Shrinking Database is not good:

- SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server
- SQL SERVER – What the Business Says Is Not What the Business Wants

I received many comments on why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:

- Create a test database
- Create two tables and populate them with data
- Check the size of both tables (the size of the database is very low)
- Check the fragmentation of one table (fragmentation will be very low)
- TRUNCATE the other table
- Check the size of the table and the fragmentation of the one table (fragmentation will still be very low)
- SHRINK the database
- Check the size of the table and the fragmentation of the one table (fragmentation will be very HIGH)
- REBUILD the index on the one table
- Check the size of the table (the size of the database is very HIGH) and the fragmentation of the one table (fragmentation will be very low)

Here is the script for the same.

USE MASTER
GO
CREATE DATABASE ShrinkIsBed
GO
USE ShrinkIsBed
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Create FirstTable
CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
GO
-- Create Clustered Index on ID
CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable
( [ID] ASC ) ON [PRIMARY]
GO
-- Create SecondTable
CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
GO
-- Create Clustered Index on ID
CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable
( [ID] ASC ) ON [PRIMARY]
GO
-- Insert One Hundred Thousand Records
INSERT INTO FirstTable (ID, FirstName, LastName, City)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Insert One Hundred Thousand Records
INSERT INTO SecondTable (ID, FirstName, LastName, City)
SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
'Bob',
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith'
ELSE 'Brown' END,
CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
ELSE 'Houston' END
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
TRUNCATE TABLE FirstTable
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. Now let us SHRINK the database and run the same checks again.

-- Shrink the Database
DBCC SHRINKDATABASE (ShrinkIsBed);
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

After the Shrinking database operation, we were able to reduce the size of the database. If you notice the fragmentation, it is considerably high. The major problem with the Shrink operation is that it increases the fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.

-- Rebuild Index on SecondTable
ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size grows to way more than the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding it is around 20 MB. The index is rebuilt inside the same user database where it is placed, which usually increases the size of the database. Look at the irony of shrinking the database.
One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which leads the size of the database to increase way beyond its original size (before shrinking). So, by shrinking, one usually does not gain what one was looking for. Rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and an interesting conversation. Let us run the following script, where we shrink the database and REORGANIZE.

-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO
-- Shrink the Database
DBCC SHRINKDATABASE (ShrinkIsBed);
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO
-- Reorganize Index on SecondTable
ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
GO
-- Name of the Database and Size
SELECT name, (size*8) Size_KB
FROM sys.database_files
GO
-- Check Fragmentations in the database
SELECT avg_fragmentation_in_percent, fragment_count
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
GO

You can see that REORGANIZE does not increase the size of the database, nor does it remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using a demo. Read the blog post of Paul Randal. The following script will clean up the database:

-- Clean up
USE MASTER
GO
ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
DROP DATABASE ShrinkIsBed
GO

There are a few valid cases for shrinking the database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild indexes in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
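As a footnote to the demo above: in routine maintenance the REBUILD-versus-REORGANIZE choice is usually driven by the fragmentation numbers themselves. A minimal T-SQL sketch of that decision follows; the 5% and 30% thresholds are the commonly cited guideline values (an assumption here, not something measured in this demo), and IndexMaintenanceAction is just an illustrative column alias.

-- Sketch: suggest a maintenance action per index based on fragmentation.
SELECT OBJECT_NAME(ips.[object_id]) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'NONE'
       END AS IndexMaintenanceAction
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i
    ON ips.[object_id] = i.[object_id] AND ips.index_id = i.index_id
WHERE ips.index_id > 0; -- skip heaps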

    Read the article

  • Alternatives for comparing data from different databases

    - by Alex
I have two huge tables in separate databases. One of them has the information on all the SMS that passed through the company's servers, while the other one has the information on the actual billing of those SMS. My job is to compare samples of both of these tables (for example, the records between 1 and 2 pm) to see if there are any differences: SMS that were sent but not charged to the user, for whatever reason that may be happening. The columns I will be using to compare are the sender's phone number and the exact date the SMS was sent. An issue here is that the dates are usually the same on both sides, but in many cases they differ by 1 or 2 seconds. I have, so far, two alternatives to do this:

1. (PL/SQL) Create two tables where I'm going to temporarily store all the records of that 1-hour sample, one for each of the main tables. Then, for each distinct phone number, select the time of every SMS sent from that phone from both of my temporary tables and start comparing one by one using cursors. In this case, the procedure would be run on the server where one of the sources is, so the contents of the other one would be looked up using a dblink.

2. (sqlplus + C++) Instead of storing the 1-hour samples in new tables, output the query to a text file. I will have two text files, one for each source. Then, open the first file and load all of its content into a hash_map (key-value) using C++, where the key will be the phone number and the value a list of times of SMS sent from that phone. Finally, open the second file, grab each line (in this format: numberX timeX), look for numberX's entry in the hash_map (which will be a list of times) and then check if timeX is in that list. If it isn't, save it somewhere to finally store it in an "uncharged" table (this would also be the final step in case 1).

My main concern is efficiency. These samples have about 2 million records in each source, so just grabbing one record on one side and looking it up on the other would not be possible. That's the reason I wanted to use hash_maps. Which do you think is a better option?
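For what it's worth, option 2 stays simple and memory-friendly if only one of the two samples is loaded into the map and each time list is sorted once, so the 1-2 second drift becomes a binary search with a tolerance. A minimal C++ sketch of that idea follows; the file names, the "phone epoch-seconds" line format and the 2-second tolerance are assumptions taken from the question, not a definitive implementation.

#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    // Load the billed sample: one "phone epochSeconds" pair per line.
    std::unordered_map<std::string, std::vector<long long>> billed;
    std::ifstream billedFile("billed_sample.txt");
    std::string phone;
    long long t;
    while (billedFile >> phone >> t)
        billed[phone].push_back(t);

    // Sort each phone's time list once so we can binary-search it.
    for (auto& entry : billed)
        std::sort(entry.second.begin(), entry.second.end());

    // Scan the sent sample; anything without a billed match within
    // +/- tolerance seconds is a candidate "uncharged" SMS.
    const long long tolerance = 2;  // per the 1-2 second clock drift
    std::ifstream sentFile("sent_sample.txt");
    while (sentFile >> phone >> t) {
        bool charged = false;
        auto it = billed.find(phone);
        if (it != billed.end()) {
            const auto& times = it->second;
            auto lo = std::lower_bound(times.begin(), times.end(), t - tolerance);
            charged = (lo != times.end() && *lo <= t + tolerance);
        }
        if (!charged)
            std::cout << phone << ' ' << t << '\n';  // report uncharged SMS
    }
}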

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish once-written information in both. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.

My two blogs

I have two working blogs:

- this one here,
- a technology and programming blog for the local market.

My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career and they have very good web statistics too.

My local blog

My local blog is about programming, web and technology. It has a much wider target audience than this blog here. For example, in my local blog I also write about local events, cool new concept phones, different sites providing interesting services and so on. But local guys can also find there my postings about how to solve one or another programming problem, and postings about the Microsoft technologies I am playing with. So far my local blog has a lot of readers for such a small country as Estonia. This blog has made me a lot of cool contacts and I have had a lot of interesting discussions there about different technical topics.

Why did I start this blog? Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to cover more than one technical topic to find enough readers. At the same time you are still interested in your main topics and you want to reach more people who share the same interests as you. Practically, one day you grow out of the local market and you go global. This is how this blog was born. Was it worth it to create, promote and mess with it? Every second I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be far more boring to work on different problems and solutions.

Defining target audiences

One thing you should always do when having more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process; you can do it in a simple way and just as effectively. Here is how I defined the target audiences of my blogs:

- local blog: the reader of my local blog is an IT professional, software developer, technology innovator or just some guy who is interested in technology,
- this blog: the reader of this blog is an experienced professional software developer who works on Microsoft technologies, or a software developer who is open minded and open to new technologies and interesting solutions to development problems.

You can see how the local blog, due to a small market with fewer people, has a wider definition of its audience, while this blog is heavily targeted at Microsoft technologies and especially at software development. On the practical side I think these decisions were also made well, because it is very hard to build up a popular general IT blog. At the global level it is better to target a specific niche and find readers who are professionals in your favorite topics.
Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

Publishing content to different blogs

My local blog and this blog have some overlapping topics like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish the same thing also on my local blog? Well, it really depends on the definition of your target audiences. If they match then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences then you may need to modify your posting before publishing it. The questions you have to answer are:

- is the target audience interested in this topic?
- is the target audience expecting a more specific and deeper handling of this topic, or a more general handling?
- is the problem you are discussing relevant for the target audience or not?

You have to answer these questions and after that make your decision. If you need to modify your original posting then take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

Cross-posting and referencing

It is tempting to save the time that preparing a blog post takes: when you are done with a posting in one blog, it may seem like a good idea to make a short posting in the other blog and add a reference to the first one where the topic is discussed at length. Well, don't do it - all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of a topic. In this case feel free to refer to the blog you are writing in English. This does not work very well in the opposite direction, because almost all my global blog readers understand English but not Estonian. For example, Estonian is a complex language and online translation tools produce very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I am keeping these two blogs as two different worlds, and if there is a posting that fits well in both blogs, I will write my posting in one blog and then answer the previous three questions before posting the same thing to the other blog.

Conclusion

Growing out of your local market is not anything mysterious if you are living in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding these new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high quality content. It is something you will learn over time, and you will learn something new every day when you are posting to your global blog. You may ask: if a global blog is a much more complex thing to do, then is it worth doing at all? My answer is: yes, do it for sure. It is not an easy thing to do when you start, but if you work on your global blog and improve it over time you will get over all the obstacles pretty soon. Just don't forget one thing - content is king, and your readers expect high quality from you.

    Read the article

  • When to use Euler vs Axis angles vs Quaternions?

    - by manning18
I understand the theory behind each, but I was wondering if people could share their experiences of when one would use one over the other. For instance, if you were implementing a chase camera, an FPS-style mouse look or writing some kinematic routine, what would be the factors you consider in choosing one type over the others, and when might you need to convert from one form of representation to another? Are there certain things that only one system can do that the others can't (e.g. smooth interpolation with quaternions)?
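On that last point, a minimal C++ sketch of spherical linear interpolation (slerp) with a hand-rolled quaternion type - not taken from any particular engine - shows why quaternions are the usual pick when orientations need to blend smoothly; Euler angles would need per-axis angle wrapping and still risk gimbal lock along the path.

#include <cmath>

struct Quat { float w, x, y, z; };  // unit quaternion: w + xi + yj + zk

// Spherical linear interpolation between unit quaternions a and b, t in [0, 1].
Quat slerp(Quat a, Quat b, float t) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;

    // q and -q encode the same rotation; flip one so we take the shortest arc.
    if (dot < 0.0f) { b = {-b.w, -b.x, -b.y, -b.z}; dot = -dot; }

    // Nearly parallel inputs: sin(theta) ~ 0, so fall back to normalized lerp.
    if (dot > 0.9995f) {
        Quat r = { a.w + t*(b.w - a.w), a.x + t*(b.x - a.x),
                   a.y + t*(b.y - a.y), a.z + t*(b.z - a.z) };
        float n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
        return { r.w/n, r.x/n, r.y/n, r.z/n };
    }

    float theta  = std::acos(dot);             // angle between the two rotations
    float invSin = 1.0f / std::sin(theta);
    float wa = std::sin((1.0f - t) * theta) * invSin;
    float wb = std::sin(t * theta) * invSin;
    return { wa*a.w + wb*b.w, wa*a.x + wb*b.x,
             wa*a.y + wb*b.y, wa*a.z + wb*b.z };
}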

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records. Records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive. If duplicate / multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...); for each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced. I have to implement this in Java or C++.

My first idea was to define functions / methods like:

- one method to get all the lines from the file,
- one method to filter out the unwanted lines,
- one method to parse the filtered lines into valid records,
- one method to remove duplicate records,
- one method to group records and number them.

The problem is that the data I am going to read can be too big and might not fit into main memory: so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once, because once a record has been consumed, all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell I immediately thought about some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is which design pattern or other technique can allow me to implement this lazy processing of streams in one of these languages.
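For what it's worth, in C++ this particular pipeline does not need a lazy-evaluation framework: a pull-based loop that reads one line at a time and carries only the previously emitted record as state already gives constant memory, because every stage (filter, parse, dedup, group-number) consumes one item and passes it on. A minimal sketch of that idea follows; the Record fields, the input file name and the keepLine/parse rules are placeholders standing in for the real criteria.

#include <fstream>
#include <iostream>
#include <optional>
#include <sstream>
#include <string>

struct Record {
    std::string group;    // the field the records are grouped by
    std::string payload;
};

// Placeholder filtering rule; the real (non-fixed) criteria go here.
bool keepLine(const std::string& line) { return !line.empty() && line[0] != '#'; }

// Placeholder parser; returning nullopt drops invalid records.
std::optional<Record> parse(const std::string& line) {
    std::istringstream in(line);
    Record r;
    if (in >> r.group >> r.payload) return r;
    return std::nullopt;
}

int main() {
    std::ifstream in("input.txt");
    std::string line;
    std::optional<Record> prev;  // last emitted record: drives dedup + grouping
    long number = 0;             // per-group counter, restarts at 1 per group

    while (std::getline(in, line)) {       // only one line in memory at a time
        if (!keepLine(line)) continue;     // drop unwanted lines
        auto rec = parse(line);
        if (!rec) continue;                // drop invalid records
        // consecutive duplicates: keep only the first occurrence
        if (prev && prev->group == rec->group && prev->payload == rec->payload)
            continue;
        // restart numbering whenever the grouping field changes
        number = (prev && prev->group == rec->group) ? number + 1 : 1;
        std::cout << rec->group << ' ' << number << ' ' << rec->payload << '\n';
        prev = rec;                        // everything before this can be freed
    }
}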

    Read the article

  • How can we copy the column data of one datatable to another, even if the column names are different

    - by Harikrishna
I have two datatables. The first is:

DataTable NameAdressPhones = new DataTable();

with three columns: Name, Adress and PhoneNo. But I want only the Name and Adress data, so I copy those columns (with data) to a new datatable:

DataTable NameAdress = new DataTable();

For that I do:

// 'columns' holds the names of the columns to copy, e.g. { "Name", "Adress" }
foreach (DataRow sourcerow in NameAdressPhones.Rows)
{
    DataRow destRow = NameAdress.NewRow();
    foreach (string colname in columns)
    {
        destRow[colname] = sourcerow[colname];
    }
    NameAdress.Rows.Add(destRow);
}

Now I clear the NameAdressPhones (first) datatable every time there are new records for the table. Every time there will be the same number of columns, but the column names will be different, like Nm instead of Name and Add instead of Address. The second datatable already has the column names Name and Address, and now I want to copy the data of the Nm and Add columns of the first datatable into it, but the column names differ between the two datatables. So even if the column names are different, I want to copy the Nm column data of the first datatable to the Name column of the second datatable, and the Add column data of the first datatable to the Address column of the second datatable.
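One way to make the copy immune to renames like Nm/Add is to drive the loop from an explicit source-to-destination column map instead of shared names. A small C# sketch of that idea follows, assuming System.Data and System.Collections.Generic are in scope; the Nm/Add to Name/Address pairs are just the examples from the question, so extend the map for the real schema.

// Map each source column name to its destination column name.
var columnMap = new Dictionary<string, string>
{
    { "Nm",  "Name" },
    { "Add", "Address" }
};

foreach (DataRow sourceRow in NameAdressPhones.Rows)
{
    DataRow destRow = NameAdress.NewRow();
    foreach (var pair in columnMap)
    {
        // pair.Key is the column in the source table, pair.Value in the destination.
        destRow[pair.Value] = sourceRow[pair.Key];
    }
    NameAdress.Rows.Add(destRow);
}

If the columns always line up positionally, the same effect can be had without any names at all by copying by ordinal (destRow[i] = sourceRow[i]), but the explicit map is safer when the column order can change.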

    Read the article

  • May one create an open source scripting application using QtScript?

    - by Manuel
Hi there,

I'd like to implement my own script engine using the QtScript component and other Qt components. Since this should be an open source (GPL) application, I thought I would be free to do this. But now I found a page at Qt's website that made me doubtful about it:

What are the restrictions with releasing scriptable applications? Unless the scripter has a Qt license, the following restrictions apply:

- The application itself may not primarily be a script development tool.
- The Qt API may not be made directly scriptable.
- Scripts may not alter the main functionality of the application.

If the major part of the application is developed with a compiled programming language and only some non-core parts of the application are made extendable/modifiable through Qt Script, the scripter does not need to purchase a Qt or QSA license.

Does this text apply to my project or not?

Cheers, Manuel

    Read the article

  • How to pass a textbox value from one webform to an XtraReport?

    - by ahmed
Hello, I have a web form with a textbox in which the user enters a number to pull information from a table. I have developed an XtraReport in which I have to display the data for the number the user entered in that textbox. Everything works fine; I only need to pass the value of the textbox (form1) to the report (form2). What I need is a way to pass the textbox value as a parameter to the report and display the report data for the selected number.
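One minimal way to do this, without depending on any particular parameter plumbing, is to pass the value through the report's constructor. A hedged C# sketch follows; the NumberReport class, the XRLabel named lblNumber and the AccountNumber column are all assumptions here, since the actual report layout and data source aren't shown in the question.

public partial class NumberReport : DevExpress.XtraReports.UI.XtraReport
{
    public NumberReport(string number)
    {
        InitializeComponent();
        // Show the value the user typed into the web form's textbox.
        lblNumber.Text = number;
        // Restrict the report's data to that number; "AccountNumber" is a
        // hypothetical column name in the report's bound data source.
        FilterString = string.Format("[AccountNumber] = '{0}'", number);
    }
}

// On the page with the textbox, construct the report with the typed value,
// e.g. reportViewer.Report = new NumberReport(txtNumber.Text);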

    Read the article

  • I am getting this error on each machine after installing ruby and rails, I created one web site and

    - by Santodsh
D:\PROJECTS\RubyOnRail\webapp\Welcome>ruby script\server
=> Booting WEBrick
=> Rails 2.3.4 application starting on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2010-01-31 21:19:34] INFO WEBrick 1.3.1
[2010-01-31 21:19:34] INFO ruby 1.8.6 (2007-09-24) [i386-mswin32]
[2010-01-31 21:19:34] INFO WEBrick::HTTPServer#start: pid=6576 port=3000
/!\ FAILSAFE /!\ Sun Jan 31 21:19:38 +0530 2010
Status: 500 Internal Server Error
uninitialized constant Encoding
c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:443:in `load_missing_constant'
c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:80:in `const_missing'
c:/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:92:in `const_missing'
c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/encoding.rb:9:in `find'
c:/ruby/lib/ruby/gems/1.8/gems/sqlite3-0.0.6/lib/sqlite3/database.rb:69:in `initialize'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `new'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:13:in `sqlite3_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
c:/ruby/lib/ruby/1.8/monitor.rb:242:in `synchronize'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:9:in `cache'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/query_cache.rb:28:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.4/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:24:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/params_parser.rb:15:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/session/cookie_store.rb:93:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/failsafe.rb:26:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:114:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/reloader.rb:34:in `run'
c:/ruby/lib/ruby/gems/1.8/gems/actionpack-2.3.4/lib/action_controller/dispatcher.rb:108:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/static.rb:31:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:46:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `each'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/urlmap.rb:40:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/rails/rack/log_tailer.rb:17:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:50:in `service'
c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
c:/ruby/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
c:/ruby/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
c:/ruby/lib/ruby/1.8/webrick/server.rb:95:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `each'
c:/ruby/lib/ruby/1.8/webrick/server.rb:92:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:23:in `start'
c:/ruby/lib/ruby/1.8/webrick/server.rb:82:in `start'
c:/ruby/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run'
c:/ruby/lib/ruby/gems/1.8/gems/rails-2.3.4/lib/commands/server.rb:111
c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
c:/ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
script/server:3
/!\ FAILSAFE /!\ Sun Jan 31 21:19:39 +0530 2010
Status: 500 Internal Server Error
uninitialized constant Encoding
(the identical backtrace is repeated for this second request)

    Read the article
