Search Results

Search found 68400 results on 2736 pages for 'industry data model'.


  • Linq to DataTable without enumerating fields

    - by Luciano
    Hi, I'm trying to query a DataTable object without specifying the fields, like this: var linqdata = from ItemA in ItemData.AsEnumerable() select ItemA but the return type is System.Data.EnumerableRowCollection&lt;System.Data.DataRow&gt; and I need a return type like System.Data.EnumerableRowCollection&lt;&lt;object,object&gt;&gt; (i.e. like a standard anonymous type). Any idea? Thanks
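
    Anonymous types need their member names at compile time, so they cannot be produced without listing the fields. A minimal sketch of one workaround, using a small stand-in table, is to project each DataRow into a dictionary keyed by column name:

```csharp
using System;
using System.Data;
using System.Linq;

class LinqDataTableSketch
{
    static void Main()
    {
        // Stand-in for the ItemData table from the question.
        var itemData = new DataTable();
        itemData.Columns.Add("Id", typeof(int));
        itemData.Columns.Add("Name", typeof(string));
        itemData.Rows.Add(1, "widget");

        // Project every row into a dictionary keyed by column name, which gives
        // field access by name without knowing the columns up front.
        var linqdata = from row in itemData.AsEnumerable()
                       select itemData.Columns
                                      .Cast<DataColumn>()
                                      .ToDictionary(c => c.ColumnName, c => row[c]);

        foreach (var fields in linqdata)
            Console.WriteLine(string.Join(", ", fields.Select(kv => kv.Key + "=" + kv.Value)));
    }
}
```

    The result is an EnumerableRowCollection of dictionaries, so each field can still be read by name without being declared up front.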

    Read the article

  • data breakpoints in avr studio

    - by Art Spasky
    I want to set a data breakpoint for the TCNT1 register of an ATmega16 in AVR Studio 4.17 Build 666. I add the breakpoint by specifying the value IO@0x2C in the Location field of the "Add data breakpoint" window, but the breakpoint does not seem to work. Can someone tell me how to set up a data breakpoint for an I/O register?

    Read the article

  • Implementing the tree with reference to the root for each leaf

    - by AntonAL
    Hi, I'm implementing a products catalog that looks like this: group 1 subgroup 1 subgroup 2 item 1 item 2 ... item n ... subgroup n group 2 subgroup 1 ... subgroup n group 3 ... group n The Models: class CatalogGroup < ActiveRecord::Base has_many: catalog_items has_many :catalog_items_all, :class_name => "CatalogItem", :foreign_key => "catalog_root_group_id" end class CatalogItem < ActiveRecord::Base belongs_to :catalog_group belongs_to :catalog_root_group, :class_name => "CatalogGroup" end Migrations: class CreateCatalogItems < ActiveRecord::Migration def self.up create_table :catalog_items do |t| t.integer :catalog_group_id t.integer :catalog_root_group_id t.string :code t.timestamps end end For convenience, I referenced each CatalogItem to its top-most CatalogGroup and named this association "catalog_root_group". This gives a simple implementation of search requests like "show me all items in group 1": we only need to deal with CatalogItem.catalog_root_group. The problem is that this association doesn't work; catalog_root_group is always nil. I have also tried to avoid the reference to the root group ("catalog_root_group") altogether, but I cannot construct an appropriate query in Ruby... Do you know how to do it?
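
    catalog_root_group_id is an ordinary foreign-key column, so ActiveRecord never fills it in on its own; it stays nil until something assigns it. A rough sketch of one way to keep it populated, assuming CatalogGroup also gets a self-referential parent_id column (not shown in the question), is to walk up to the root group in a save callback:

```ruby
class CatalogGroup < ActiveRecord::Base
  belongs_to :parent, :class_name => "CatalogGroup", :foreign_key => "parent_id"
  has_many :children, :class_name => "CatalogGroup", :foreign_key => "parent_id"
  has_many :catalog_items
  has_many :catalog_items_all, :class_name => "CatalogItem",
           :foreign_key => "catalog_root_group_id"

  # Walk up the group hierarchy until the top-most group is reached.
  def root_group
    parent ? parent.root_group : self
  end
end

class CatalogItem < ActiveRecord::Base
  belongs_to :catalog_group
  belongs_to :catalog_root_group, :class_name => "CatalogGroup"

  before_save :assign_root_group

  private

  # Keep the denormalized root reference in sync whenever an item is saved.
  def assign_root_group
    self.catalog_root_group = catalog_group.root_group if catalog_group
  end
end
```

    With the root reference populated, group.catalog_items_all answers "show me all items in group 1" directly.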

    Read the article

  • SQLServer Binary Data with ActiveRecord and JDBC

    - by John Duff
    I'm using the activerecord-jdbc-adapter with ActiveRecord to access a SQL Server database from a Rails application running under JRuby, and I'm having trouble inserting binary data. The exception I am getting is below. Note I just have a blurb for the binary data from the fixtures, which was working fine for MySQL. ActiveRecord::StatementInvalid: ActiveRecord::ActiveRecordError: Operand type clash: nvarchar is incompatible with image: INSERT INTO blobstorage_datachunks ([id], [datafile_id], [chunk_number], [data]) VALUES (369397133, 663419003, 0, N'GIF89a@') When I created the tables, the migration used binary and SQL Server used image instead. We're using Rails 2.3.5 and SQL Server Express 2008. What I'm looking for is a way to get the binary data into SQL Server with ActiveRecord. Thanks in advance for the help.

    Read the article

  • Is this a situation where Qt Model/View architecture is not useful?

    - by csmithmaui
    Hi, I am writing a GUI-based application where I read a string of values from a serial port every few seconds, and I need to display most of the values in some type of graphical indicator (I was thinking maybe QProgressBar) that shows the range and the value. Some of the other data I parse from the string are the date and fault codes. Also, the data is hierarchical. I wanted to use Qt's model/view architecture because I have been interested in MVC for a while but have never quite wrapped my brain around how to implement it well. As of now, I have subclassed QAbstractItemModel, and in the model I read the serial port and wrap the items parsed from the string in a tree data structure. I can view all of the data in a QTreeView with no issues. I have also begun to subclass QAbstractItemView to build my custom view with all of the graphical indicators and such. This is where I am getting stuck. It seems to me that in order to design a view that knows how to display my custom model, the view needs to know exactly how all of the data in the model is organized. Doesn't that defeat the purpose of model/view? The QTreeView I tested the model with basically just displays the model as it is set up in the tree structure, but I don't want to do that because the data is not all of the same type. Is the type of data, or the way you would like to present it to the user, a determining factor in whether or not you should use this architecture? I always assumed it was simply better to design in an MVC style. It seems to me like it might have been better to just subclass QWidget and then read from the serial port and update all of the subwidgets (graphical indicators, labels, etc.) from that subclass; essentially, do everything in one class. Can anybody explain either what I am missing or why I shouldn't be doing it this way? As of now I am a little confused. Thanks so much for any help!

    Read the article

  • Flex: can't edit item in DataGrid with overridden set data method

    - by Markus
    Hi everybody, I have a custom itemRenderer for my DataGrid. To set the actual data I use the following method: override public function set data(side:Object):void{ ... } As soon as I use this function the cell no longer shows any itemEditor. Why is that? When I remove this function the itemEditor works, but with the wrong initialization data... What's the proper way to handle this? Thanks, Markus
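
    The base renderer keeps its own bookkeeping inside the data setter, so overriding it without calling up the inheritance chain is the usual reason the itemEditor stops appearing. A minimal sketch of the typical fix, with the custom initialization left as a placeholder:

```actionscript
override public function set data(side:Object):void
{
    // Forward to the base class first so the DataGrid's renderer/editor
    // plumbing still gets the item it expects.
    super.data = side;

    if (side != null)
    {
        // ...custom initialization of this renderer goes here...
    }
}
```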

    Read the article

  • GCC (ld) option to strip unreferenced data/functions

    - by legends2k
    I've written a program which uses a library that has numerous functions, but I only use a limited set of them. GCC is the compiler I use. Once I've created a binary and run nm on it to see the symbols, it shows all the unwanted (unreferenced) functions which are never used. How do I remove those unreferenced functions and data from the executable? Is the -s option right? I'm told that it strips all symbol table and relocation data from the binary, but does this remove the functions and data too? I'm not sure how to verify this either, since after using -s, nm no longer works because all the symbol table data has been stripped.
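
    For what it's worth, -s only strips the symbol and relocation tables; it does not remove the dead code itself. A sketch of the usual approach (file names illustrative) is to place each function and data object in its own section and let the linker discard the sections nothing references; the library itself should be compiled with the same flags for its unused functions to be dropped:

```sh
# Put every function / data object into its own section.
gcc -ffunction-sections -fdata-sections -c main.c -o main.o

# Let the linker throw away sections that nothing references.
gcc main.o -L. -lmylib -Wl,--gc-sections -o app

# Optionally ask the linker to report what it discarded.
gcc main.o -L. -lmylib -Wl,--gc-sections,--print-gc-sections -o app
```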

    Read the article

  • Convert db data encoding to UTF-8

    - by lnetanel
    Hi, I use SQL Server 2000 and I inserted some data in Hebrew into a table. I need to access this data through an iPhone app using an ASP page that queries the table. The problem is that in the iPhone app the Hebrew is shown as strange characters. I think the problem is that the data coming out of my db isn't in UTF-8 but in UCS-2. Any suggestions on how to convert the data from my db to UTF-8 so it will be readable on the iPhone? Thanks.
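
    nvarchar columns are stored as UCS-2 inside SQL Server; the mangling usually happens when the ASP page re-encodes the text with the server's default codepage on the way out. A minimal sketch, assuming a classic ASP page and an illustrative recordset/column name, is to force the response to UTF-8:

```asp
<%@ Language="VBScript" CodePage="65001" %>
<%
    ' Emit the body as UTF-8 and advertise it in the Content-Type header.
    Response.CodePage = 65001
    Response.Charset = "UTF-8"

    ' ...open the connection/recordset as usual, then write the Hebrew text...
    Response.Write rs("HebrewColumn")
%>
```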

    Read the article

  • Getting GPS data?

    - by svebee
    Inside public class IAmHere extends Activity implements LocationListener { i have @Override public void onLocationChanged(Location location) { // TODO Auto-generated method stub } @Override public void onProviderDisabled(String provider) { // TODO Auto-generated method stub } @Override public void onProviderEnabled(String provider) { // TODO Auto-generated method stub } @Override public void onStatusChanged(String provider, int status, Bundle extras) { // TODO Auto-generated method stub } and inside public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.iamhere); i have LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE); List<String> providers = lm.getProviders(true); /* Loop over the array backwards, and if you get an accurate location, then break out the loop*/ Location l = null; for (int i=providers.size()-1; i>=0; i--) { l = lm.getLastKnownLocation(providers.get(i)); if (l != null) break; } double[] gps = new double[2]; if (l != null) { gps[0] = l.getLatitude(); gps[1] = l.getLongitude(); } gpsString = (TextView)findViewById(R.id.gpsString); String Data = ""; String koordinata1 = Double.toString(gps[0]); String koordinata2 = Double.toString(gps[1]); Data = Data + koordinata1 + " | " + koordinata2 + "\n"; gpsString.setText(String.valueOf(Data)); but seems it's not working? Why? I mean even emulator doesn't want to send GPS data - When I click "send" via UI or console, nothing happens...? Thank you.
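
    getLastKnownLocation() returns null until a provider has actually produced a fix, which is why the code above, which only reads a cached location once in onCreate(), appears to ignore the emulator's "send". A rough sketch of reading the coordinates when they arrive instead (a trimmed version of the same Activity; it also needs ACCESS_FINE_LOCATION in the manifest):

```java
import android.app.Activity;
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import android.widget.TextView;

public class IAmHere extends Activity implements LocationListener {

    private TextView gpsString;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.iamhere);
        gpsString = (TextView) findViewById(R.id.gpsString);

        // Ask for fixes instead of relying on a cached last-known location.
        LocationManager lm = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 1, this);
    }

    @Override
    public void onLocationChanged(Location location) {
        // Called when the emulator (or a real GPS) delivers a fix.
        gpsString.setText(location.getLatitude() + " | " + location.getLongitude());
    }

    @Override public void onProviderDisabled(String provider) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
}
```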

    Read the article

  • Transform OpenCV image data type to DevIL image format and vice versa

    - by D.K
    I want to use a CUDA-enabled SIFT library, but I am using the OpenCV driver to get images from the webcam. The CUDA library uses the DevIL library for its image data types. Should I transform the images from OpenCV data types to DevIL, or should I use another method of getting images from the webcam (one with DevIL-compatible data types)? Thanks for your attention.

    Read the article

  • GWT, MVP, and UiBinder - How to get the best of all worlds

    - by Stephane Grenier
    With MVP, you normally bind the View (UI) with the Presenter in the Presenter. However with the latest version of GWT, especially with UIBinding, you can do the following in the View: @UiHandler("loginButton") void onAboutClicked(ClickEvent event) { // my login code } Which basically exchanges a lot of anonymous inner class code for some quick annotation code. Very nice!! The problem is that this code is in the view and not the presenter... So I thought maybe: @UiHandler("loginButton") void onAboutClicked(ClickEvent event) { myPresenter.onAboutClicked(...); } But there are several problems with this approach. The most important, you blur the lines between View and Presenter. Who does which binding, in some cases it's the View, in others it's the presenter (binding to events not in your current view but that need to be attached - for example a system wide update event). You still get the benefit of being able to unit test your presenter, but at what cost. The responsibilities are messy now. For example the binding is sometimes in the View and others times in the Presenter level. I can see the code falling into all kinds of chaos with time. I also thought of extending the Presenter to the View, so that you could do this in the View. The problem here is that you lose the Presenter's ability to run standard unit tests! That's a major issue. That and the lines again become blurred. So my question, does anyone have a good method of taking advantage of the annotation from UIBinding within the MVP pattern without blurring the lines and losing the advantages of the MVP pattern?
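
    One common compromise is to let the @UiHandler stay in the view but have it do nothing except forward to a small callback interface, so the logic (and the unit tests) remain on the presenter side. A rough sketch with illustrative names; the matching LoginView.ui.xml template with a ui:field="loginButton" is assumed:

```java
import com.google.gwt.core.client.GWT;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.uibinder.client.UiBinder;
import com.google.gwt.uibinder.client.UiField;
import com.google.gwt.uibinder.client.UiHandler;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.Composite;
import com.google.gwt.user.client.ui.Widget;

public class LoginView extends Composite {

    interface Binder extends UiBinder<Widget, LoginView> { }
    private static final Binder BINDER = GWT.create(Binder.class);

    /** The only thing the view knows about the presenter. */
    public interface Delegate {
        void onLoginClicked();
    }

    @UiField Button loginButton;
    private Delegate delegate;

    public LoginView() {
        initWidget(BINDER.createAndBindUi(this));
    }

    public void setDelegate(Delegate delegate) {
        this.delegate = delegate;
    }

    @UiHandler("loginButton")
    void onLoginClicked(ClickEvent event) {
        // No logic here: the binding lives in the view, the behaviour in the presenter.
        if (delegate != null) {
            delegate.onLoginClicked();
        }
    }
}
```

    The presenter implements Delegate, calls setDelegate(this) when it takes over the view, and can still be unit tested against a plain mock of the view.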

    Read the article

  • How to insert data if it contains an apostrophe?

    - by angel ansari
    Actally my task is load csv file into sql server using c# so i have split it by comma my problem is that some field's data contain apostrop and i m firing insert query to load data into sql so its give error my coding like that using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Data.SqlClient; namespace tool { public partial class Form1 : Form { StreamReader reader; SqlConnection con; SqlCommand cmd; int count = 0; //int id=0; FileStream fs; string file = null; string file_path = null; SqlCommand sql_del = null; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { OpenFileDialog file1 = new OpenFileDialog(); file1.ShowDialog(); textBox1.Text = file1.FileName.ToString(); file = Path.GetFileName(textBox1.Text); file_path = textBox1.Text; fs = new FileStream(file_path, FileMode.Open, FileAccess.Read); } private void button2_Click(object sender, EventArgs e) { if (file != null ) { sql_del = new SqlCommand("Delete From credit_debit1", con); sql_del.ExecuteNonQuery(); reader = new StreamReader(file_path); string line_content = null; string[] items = new string[] { }; while ((line_content = reader.ReadLine()) != null) { if (count >=4680) { items = line_content.Split(','); string region = items[0].Trim('"'); string station = items[1].Trim('"'); string ponumber = items[2].Trim('"'); string invoicenumber = items[3].Trim('"'); string invoicetype = items[4].Trim('"'); string filern = items[5].Trim('"'); string client = items[6].Trim('"'); string origin = items[7].Trim('"'); string destination = items[8].Trim('"'); string agingdate = items[9].Trim('"'); string activitydate = items[10].Trim('"'); if ((invoicenumber == "-") || (string.IsNullOrEmpty(invoicenumber))) { invoicenumber = "null"; } else { invoicenumber = "'" + invoicenumber + "'"; } if ((destination == "-") || (string.IsNullOrEmpty(destination))) { destination = "null"; } else { destination = "'" + destination + "'"; } string vendornumber = items[11].Trim('"'); string vendorname = items[12].Trim('"'); string vendorsite = items[13].Trim('"'); string vendorref = items[14].Trim('"'); string subaccount = items[15].Trim('"'); string osdaye = items[16].Trim('"'); string osaa = items[17].Trim('"'); string osda = items[18].Trim('"'); string our = items[19].Trim('"'); string squery = "INSERT INTO credit_debit1" + "([id],[Region],[Station],[PONumber],[InvoiceNumber],[InvoiceType],[FileRefNumber],[Client],[Origin],[Destination], " + "[AgingDate],[ActivityDate],[VendorNumber],[VendorName],[VendorSite],[VendorRef],[SubAccount],[OSDay],[OSAdvAmt],[OSDisbAmt], " + "[OverUnderRecovery] ) " + "VALUES " + "('" + count + "','" + region + "','" + station + "','" + ponumber + "'," + invoicenumber + ",'" + invoicetype + "','" + filern + "','" + client + "','" + origin + "'," + destination + "," + "'" + (string)agingdate.ToString() + "','" + (string)activitydate.ToString() + "','" + vendornumber + "',' " + vendorname + "',' " + vendorsite + "',' " + vendorref + "'," + "'" + subaccount + "','" + osdaye + "','" + osaa + "','" + osda + "','" + our + "') "; cmd = new SqlCommand(squery, con); cmd.CommandTimeout = 1500; cmd.ExecuteNonQuery(); } label2.Text = count.ToString(); Application.DoEvents(); count++; } MessageBox.Show("Process completed"); } else { MessageBox.Show("path select"); } } private void button3_Click(object sender, EventArgs e) { this.Close(); } 
private void Form1_Load(object sender, EventArgs e) { con = new SqlConnection("Data Source=192.168.50.200;User ID=EGL_TEST;Password=TEST;Initial Catalog=EGL_TEST;"); con.Open(); } } } The VendorName field contains data like (MCCOLLISTER'S TRANSPORTATION), so how do I pass this data?
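
    Concatenating the values into the SQL string is what breaks on apostrophes (and leaves the insert open to SQL injection). A minimal sketch of the usual fix, passing each field as a SqlParameter so no escaping is needed (only a few of the question's columns are shown):

```csharp
using System.Data.SqlClient;

class CsvInsert
{
    static void InsertRow(SqlConnection con, int id, string region, string vendorName)
    {
        const string sql =
            "INSERT INTO credit_debit1 ([id], [Region], [VendorName]) " +
            "VALUES (@id, @region, @vendorName)";

        using (SqlCommand cmd = new SqlCommand(sql, con))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@region", region);
            // Values such as MCCOLLISTER'S TRANSPORTATION are passed as-is;
            // the driver handles quoting, so the apostrophe is harmless.
            cmd.Parameters.AddWithValue("@vendorName", vendorName);
            cmd.ExecuteNonQuery();
        }
    }
}
```

    For loading a whole CSV, SqlBulkCopy is also worth a look, since it avoids one INSERT round trip per row.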

    Read the article

  • Attach data to RichText using Flex, MySQL and PHP (newbie)

    - by dmschenk
    I'm trying to attach data to a RichText field in Flash Builder. The data is an HTML string in a MySQL database and I'm connecting using PHP. The PHP script just converts to XML. The connection works, because I can easily dump the data to a DataGrid and see the string. When I try to hook the data to a RichText field I get "[object CallResponder]", so I am connected, but I'm not sure how to break it down so that it is just an HTML string for the text field. Thanks protected function creationCompleteHandler(event:FlexEvent):void { getAllAbout_fxResult.token = aboutfxService.getAllAbout_fx(); } <s:RichText x="10" y="10" text="{getAllAbout_fxResult}" creationComplete="creationCompleteHandler(event)" width="300" height="300"/>
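
    Binding straight to the CallResponder prints the responder object itself; the returned records live in its lastResult. A rough sketch, assuming the service returns a collection whose items carry the HTML in an illustrative body field, is to bind through a helper that imports the HTML into a TextFlow:

```actionscript
// In the Script block; the RichText then binds to the helper instead of the responder:
//   <s:RichText width="300" height="300"
//               textFlow="{buildFlow(getAllAbout_fxResult.lastResult)}"/>

import flashx.textLayout.conversion.TextConverter;
import flashx.textLayout.elements.TextFlow;
import mx.collections.ArrayCollection;

private function buildFlow(result:ArrayCollection):TextFlow
{
    if (result == null || result.length == 0)
        return null;
    // Convert the stored HTML string into a TextFlow the RichText can render.
    return TextConverter.importToFlow(result.getItemAt(0).body,
                                      TextConverter.TEXT_FIELD_HTML_FORMAT);
}
```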

    Read the article

  • Nullable<> as TModel for ViewPage

    - by Alexander Prokofyev
    What are the possible reasons why Nullable&lt;&gt; types are disallowed as the TModel parameter of the System.Web.Mvc.ViewPage&lt;TModel&gt; generic? This could be handy sometimes. In the ASP.NET MVC source, TModel is constrained to be a class: public class ViewPage&lt;TModel&gt; : ViewPage where TModel : class but Nullable types are value types. Maybe the definition could be less restrictive...

    Read the article

  • Computation on db data then list them using either SimpleCursorAdapter or ArrayAdapter

    - by kc2uno
    Hi all, I juststarted programming in android a few weeks ago, so I am not entirely sure how to deal with listing values. Please help me out! I have some questions regarding displaying data sets from db in a list. Currently I have a cursor returned by my db points to a list of rows and I want display 2 columns values in a single row of the list. The row xml looks like this: <TextView android:id="@+id/text1" android:textSize="16sp" android:textStyle="bold" android:layout_width="fill_parent" android:layout_height="wrap_content"/> <TextView android:id="@+id/text2" android:textSize="14sp" android:layout_width="fill_parent" android:layout_height="wrap_content"/> so I was thinking using simplecursoradapter which supposedly makes my life easier by displaying the data in a list. However that is only true if I want to display the raw data. For the purpose of my program I need to do some computations on the raw data sets, then display them. I am not sure how to do that using SimpleCursorAdapter. Here's how I display the raw data: String[] from = new String[]{BtDbAdapter.KEY_EX_TYPE,BtDbAdapter.KEY_EX_TIMESTAMP}; int[] to = new int[]{R.id.text1, R.id.text2}; // Now create a simple cursor adapter and set it to display SimpleCursorAdapter records = new SimpleCursorAdapter(this, R.layout.exset_row, mExsetCursor, from, to); setListAdapter(records); Is there a way to do computation on the data in those rows before I bind it with the SimpleCursorAdapter? I was trying to use an alternative way of doing this by using arraylist and arrayadapter, but that way I dont know to how achieve displaying 2 items in a single row. This is my code for using arrayadapter which only display 1 text in a row instead of 2 textviews in a row: //fill in the array timestamp_arr = new ArrayList<String>(); type_arr = new ArrayList<String>(); fillRecord(); Log.d(TAG,"setting now in recordlist"); setListAdapter(new ArrayAdapter<String>(this, R.layout.list_item,timestamp_arr)); setListAdapter(new ArrayAdapter<String>(this, R.layout.list_item2,type_arr)); It's very obvious that it only displays one textview in a row because I set the second arrayadapter overwrites the first one! I was trying to use R.id.text1 and R.id.text2 for them, but it gave me some errors saying 04-23 01:40:58.658: ERROR/AndroidRuntime(3309): android.content.res.Resources$NotFoundException: Resource ID #0x7f070008 type #0x12 is not valid I believe the second method can achieve this, but I'm not sure how do deal with the layout problems, so if you any suggestions, please post them out. Thank you!!
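
    A SimpleCursorAdapter can still do this: a ViewBinder is consulted for every (view, cursor column) pair before the default binding, so a computed value can be set on R.id.text2 while R.id.text1 keeps the raw value. A rough sketch that slots into the same activity as the code above (formatElapsed is a hypothetical helper standing in for your computation):

```java
String[] from = new String[] { BtDbAdapter.KEY_EX_TYPE, BtDbAdapter.KEY_EX_TIMESTAMP };
int[] to = new int[] { R.id.text1, R.id.text2 };

SimpleCursorAdapter records =
        new SimpleCursorAdapter(this, R.layout.exset_row, mExsetCursor, from, to);

records.setViewBinder(new SimpleCursorAdapter.ViewBinder() {
    @Override
    public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
        if (view.getId() == R.id.text2) {
            long timestamp = cursor.getLong(columnIndex);
            ((TextView) view).setText(formatElapsed(timestamp)); // your computation here
            return true;  // handled; skip the default raw binding for this column
        }
        return false;     // everything else falls back to the default binding
    }
});

setListAdapter(records);
```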

    Read the article

  • How can we secure our data from the DBA?

    - by KoolKabin
    Hi guys, I have very confidential data in my database and I am trying to secure it from the DBA. I am a member of the development team. We develop our software and deploy it on a server which has its own DBA, and we have limited control over that server. In this scenario, how can I prevent the server's DBA from looking at my data and from making changes to it? Is that possible?

    Read the article

  • setIncludesSubentities: in an NSFetchRequest is broken for entities across multiple persistent stores

    - by SG
    Prior art which doesn't quite address this: http://stackoverflow.com/questions/1774359/core-data-migration-error-message-model-does-not-contain-configuration-xyz I have narrowed this down to a specific issue. It takes a minute to set up, though; please bear with me. The gist of the issue is that a persistentStoreCoordinator (apparently) cannot preserve the part of an object graph where a managedObject is marked as a subentity of another when they are stored in different files. Here goes... 1) I have 2 xcdatamodel files, each containing a single entity. In runtime, when the managed object model is constructed, I manually define one entity as subentity of another using setSubentities:. This is because defining subentities across multiple files in the editor is not supported yet. I then return the complete model with modelByMergingModels. //Works! [mainEntity setSubentities:canvasEntities]; NSLog(@"confirm %@ is super for %@", [[[canvasEntities lastObject] superentity] name], [[canvasEntities lastObject] name]); //Output: "confirm Note is super for Browser" 2) I have modified the persistentStoreCoordinator method so that it sets a different store for each entity. Technically, it uses configurations, and each entity has one and only one configuration defined. //Also works! for ( NSString *configName in [[HACanvasPluginManager shared].registeredCanvasTypes valueForKey:@"viewControllerClassName"] ) { storeUrl = [NSURL fileURLWithPath:[[self applicationDocumentsDirectory] stringByAppendingPathComponent:[configName stringByAppendingPathExtension:@"sqlite"]]]; //NSLog(@"entities for configuration '%@': %@", configName, [[[self managedObjectModel] entitiesForConfiguration:configName] valueForKey:@"name"]); //Output: "entities for configuration 'HATextCanvasController': (Note)" //Output: "entities for configuration 'HAWebCanvasController': (Browser)" if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:configName URL:storeUrl options:options error:&error]) //etc 3) I have a fetchRequest set for the parent entity, with setIncludesSubentities: and setAffectedStores: just to be sure we get both 1) and 2) covered. When inserting objects of either entity, they both are added to the context and they both are fetched by the fetchedResultsController and displayed in the tableView as expected. // Create the fetch request for the entity. NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; [fetchRequest setEntity:entity]; [fetchRequest setIncludesSubentities:YES]; //NECESSARY to fetch all canvas types [fetchRequest setSortDescriptors:sortDescriptors]; [fetchRequest setFetchBatchSize:20]; // Set the batch size to a suitable number. [fetchRequest setAffectedStores:[[managedObjectContext persistentStoreCoordinator] persistentStores]]; [fetchRequest setReturnsObjectsAsFaults:NO]; Here is where it starts misbehaving: after closing and relaunching the app, ONLY THE PARENT ENTITY is fetched. If I change the entity of the request using setEntity: to the entity for 'Note', all notes are fetched. If I change it to the entity for 'Browser', all the browsers are fetched. Let me reiterate that during the run in which an object is first inserted into the context, it will appear in the list. It is only after save and relaunch that a fetch request fails to traverse the hierarchy. Therefore, I can only conclude that it is the storage of the inheritance that is the problem. 
Let's recap why:
    - Both entities can be created, inserted into the context, and viewed, so the model is working
    - Both entities can be fetched with a single request, so the inheritance is working
    - I can confirm that the files are being stored separately and objects are going into their appropriate stores, so saving is working
    - Launching the app with either entity set for the request works, so retrieval from the store is working
    - This also means that traversing different stores with the request is working
    - By using a single store instead of multiple, the problem goes away completely, so creating, storing, fetching, viewing etc. are working correctly.
    This leaves only one culprit (to my mind): the inheritance I'm setting with setSubentities: is effective only for objects created during the session. Either objects/entities are being stored stripped of the inheritance info, or entity inheritance as defined programmatically only applies to new instances, or both. Either of these is unacceptable. Either it's a bug or I am way, way off course. I have been at this every which way for two days; any insight is greatly appreciated. The current workaround (just using a single store) works completely, except it won't be future-proof in the event that I remove one of the models from the app. It also boggles the mind, because I can't see why you would have all this infrastructure for storing across multiple stores and for setting affected stores in fetch requests if it by definition (of setSubentities:) doesn't work.

    Read the article

  • ObjectContext ConnectionString Sqlite

    - by codegarten
    I need to connect to an SQLite database, so I downloaded and installed System.Data.SQLite and dragged all my tables in with the designer. The designer created a .cs file with public class Entities : ObjectContext and 3 constructors: 1st public Entities() : base("name=Entities", "Entities") this one loads the connection string from App.config and works fine. App.config <connectionStrings> <add name="Entities" connectionString="metadata=res://*/Db.TracModel.csdl|res://*/Db.TracModel.ssdl|res://*/Db.TracModel.msl;provider=System.Data.SQLite;provider connection string=&quot;data source=C:\Users\Filipe\Desktop\trac.db&quot;" providerName="System.Data.EntityClient" /> </connectionStrings> 2nd public Entities(string connectionString) : base(connectionString, "Entities") 3rd public Entities(EntityConnection connection) : base(connection, "Entities") Here is the problem: I have already tried many configurations and already used EntityConnectionStringBuilder to make the connection string, with no luck. Can you please point me in the right direction? EDIT(1) How can I construct a valid connection string?
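
    For the second constructor, the string has to be a complete EntityClient connection string (metadata plus provider plus provider connection string), not just the SQLite part. A minimal sketch of building one in code, with the metadata paths copied from the App.config above and an illustrative database path:

```csharp
using System.Data.EntityClient;

class EntitiesFactory
{
    static Entities Open(string dbFile)
    {
        var builder = new EntityConnectionStringBuilder
        {
            Provider = "System.Data.SQLite",
            ProviderConnectionString = "data source=" + dbFile,
            Metadata = "res://*/Db.TracModel.csdl|res://*/Db.TracModel.ssdl|res://*/Db.TracModel.msl"
        };

        // Uses the generated Entities(string connectionString) constructor.
        return new Entities(builder.ConnectionString);
    }
}
```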

    Read the article

  • How to use DAOs with Hibernate/JPA?

    - by Ari
    Assuming the DAO structure and component interaction described below, how should DAOs be used with persistence layers like Hibernate and TopLink? What methods should/shouldn't they contain? Would it be bad practice to move the code from the DAO directly into the service? For example, let's say that for every model we have a DAO (that may or may not implement a base interface) that looks something like the following: public class BasicDao<T> { public List<T> list() { ... } public <T> retrieve() { ... } public void save() { ... } public void delete() { ... } } The component interaction pattern is: service -> DAO -> model
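
    With JPA or Hibernate the DAO usually ends up as a thin wrapper over the EntityManager/Session, which is why some teams skip it and use the persistence API in services directly; keeping it still buys a seam for tests and a place for reusable queries. A rough sketch of what the generic DAO often looks like with JPA (names illustrative):

```java
import java.util.List;
import javax.persistence.EntityManager;

public class JpaDao<T> {

    private final EntityManager em;
    private final Class<T> type;

    public JpaDao(EntityManager em, Class<T> type) {
        this.em = em;
        this.type = type;
    }

    public T retrieve(Object id) {
        return em.find(type, id);
    }

    public List<T> list() {
        // Assumes the default entity name (the unqualified class name).
        return em.createQuery("select e from " + type.getSimpleName() + " e", type)
                 .getResultList();
    }

    public void save(T entity) {
        em.persist(entity);
    }

    public void delete(T entity) {
        em.remove(entity);
    }
}
```

    Transaction handling tends to live in the service layer either way, so whether to fold this into the service is mostly a question of how much you want to mock in tests rather than a hard rule.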

    Read the article

  • Get script of SQL Server data

    - by Jared
    I'm looking for a way to do something analogous to MySQL's mysqldump with SQL Server. I need to be able to pick the tables and export the schema and the data (or I can export the schema via SQL Server Management Studio and export the data separately somehow). I need this data to be able to turn around and go back into SQL Server, so it needs to maintain GUIDs/uniqueidentifiers and other column types. Does anyone know of a good tool for this?

    Read the article

  • Mock Repository vs. Real Repository w/Mocked Data

    - by n8wrl
    I must be doing something fundamentally wrong. I am implementing my repositories and then testing them with mocked data. All is well. Now I want to test my domain objects, so I point them at mock repositories. But I'm finding that I have to re-implement logic from the 'real' repositories in the mocks, or create 'helper classes' that encapsulate the logic and interact with the repositories (real or mock), and then I have to test those too. So what am I missing: why implement and test mock repositories when I could use the real ones with mocked data? EDIT: To clarify, by 'mocked data' I mean I do not hit the actual database. I have a 'DB mock layer' I can insert under the real repositories that returns known data.

    Read the article

  • SQL Express under IIS 7.5

    - by fampinheiro
    I´m developing a web service that access a SQL Express database, it works very well in the Visual Studio host but when i deploy it to IIS 7.5 i get this exception. Please help me. Stack Trace: System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed. at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK) at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject) at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart) at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance) at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options) at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) at System.Data.SqlClient.SqlConnection.Open() at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) --- End of inner exception stack trace --- at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure) at System.Data.EntityClient.EntityConnection.Open() at System.Data.Objects.ObjectContext.EnsureConnection() at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption) at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source) 
at WSCinema.CinemaService.Movie() in D:\Documents\My Dropbox\Projects\sd.v0910\trab3\code\WSCinema\CinemaService.asmx.cs:line 46
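
    The "Failed to generate a user instance ... retrieving the user's local application data path" error typically means the application pool identity has no loaded Windows profile, which SQL Server Express user instances require (the Visual Studio host works because it runs under the logged-in user). One sketch of a fix is to turn profile loading on for the pool (pool name illustrative); the alternative is to attach the database to the SQL Express service and drop User Instance=True from the connection string:

```cmd
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.loadUserProfile:true
```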

    Read the article

  • Opening Blob Data stored in Sqlite as a File with Tcl/Tk

    - by dmullins
    I am using the following Tcl code to store a file from my desktop into an SQLite database as blob data ($fileText is a path to a text file): sqlite3 db Docs.db set fileID [open $fileText RDONLY] fconfigure $fileID -translation binary set content [read $fileID] close $fileID db eval {insert into Document (Doc) VALUES ($content)} db close I have found many resources on how to open the blob data to read and write to it, but I cannot find any resources on opening the blob data as a file. For example, if $fileText were a PDF, how would I open it from SQLite as a PDF?
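
    The blob can be read back with a plain SELECT and written to disk in binary mode; the OS then opens that file like any other document. A rough sketch for Windows, assuming an integer DocID key on the Document table and an illustrative output path:

```tcl
package require sqlite3
sqlite3 db Docs.db

# Pull the stored blob back out (DocID = 1 is illustrative).
set content [db onecolumn {SELECT Doc FROM Document WHERE DocID = 1}]
db close

# Write it to disk exactly as stored.
set outPath [file join $::env(TEMP) exported.pdf]
set out [open $outPath w]
fconfigure $out -translation binary
puts -nonewline $out $content
close $out

# Hand the file to whatever application is registered for .pdf.
exec {*}[auto_execok start] "" $outPath
```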

    Read the article

  • Windows Azure: need to know the data processing time

    - by veda
    I have stored some files as blobs on Azure and I have written an application that accesses these blobs. When I host this application as a web role on Azure, it works perfectly and I am happy with that. But now I want to know: what is the query time taken to access each blob file? I searched the Microsoft Azure Storage SLA and found that for the GetBlob request type, the maximum processing time should be within 2 seconds multiplied by the number of MBs transferred in processing the request. I am still unclear. What is the actual processing time of my data query? How can I measure it? Can I speed up the processing time? I understand that the processing time depends on internet speed, the location of the data center where my data is stored, and the location of the data center where my application is hosted. But still, will I be able to speed up my query?
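
    The SLA figure is only a ceiling; the actual time is easiest to get by measuring it from the web role itself. A minimal sketch, where fetchBlob stands in for whatever storage-client call the application already makes:

```csharp
using System;
using System.Diagnostics;

class BlobTiming
{
    // Wrap the existing blob download in a stopwatch to see the real end-to-end time.
    static TimeSpan Measure(Func<byte[]> fetchBlob)
    {
        Stopwatch sw = Stopwatch.StartNew();
        byte[] data = fetchBlob();
        sw.Stop();
        Console.WriteLine("Fetched {0} bytes in {1} ms", data.Length, sw.ElapsedMilliseconds);
        return sw.Elapsed;
    }
}
```

    Measured from a web role in the same region or affinity group as the storage account, this mostly reflects the storage service itself rather than the public internet, which is also the main lever for speeding it up.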

    Read the article
