Search Results

Search found 31293 results on 1252 pages for 'database agnostic'.

Page 690/1252 | < Previous Page | 686 687 688 689 690 691 692 693 694 695 696 697  | Next Page >

  • Collecting high-volume video viewing data

    - by DanK
    I want to add tracking to our Flash-based media player so that we can provide analytics showing which sections of videos are being watched (at the moment we just register a view when a video starts playing). For example, if a viewer watches the first 30 seconds of a video and then clicks away to something else, we want the data to reflect that. Likewise, if someone watches the first 10 seconds, then scrubs the timeline to the last minute of the video and watches that, we want to register viewing on the parts watched and not the middle section.

    My first thought was to collect the viewing data in the player and send it all to the server at the end of a viewing session. Unfortunately, Flash does not seem to have an event you can hook into when a viewer navigates away from the page the movie is on (probably a good thing - it would be open to abuse). So it looks like we're going to have to make regular requests to the server as the video is playing. This will obviously lead to a high volume of requests when there are large numbers of simultaneous viewers. Simply dumping all of these 'heartbeat' events from clients into a database feels like it will quickly become unmanageable, so I'm wondering whether I should cache viewing sessions in memory and flush them to the database when they become inactive (based on a timeout). That way the data could be stored as time spans rather than individual heartbeats.

    So, to the question - what is the best way to deal with this kind of high-volume viewing data? Are there any good existing architectures/patterns? Thanks, Dan.
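
    A minimal sketch of the cache-and-flush idea described above (all names are illustrative, and the contiguity rule assumes the player pings every few seconds):

        import time

        class ViewingSessions:
            """Accumulates heartbeat pings in memory and flushes each session
            to persistent storage as watched time spans once it has been idle
            for longer than `timeout` seconds."""

            def __init__(self, timeout=60, heartbeat=5):
                self.timeout = timeout        # seconds of silence before a session is flushed
                self.heartbeat = heartbeat    # expected seconds between client pings
                self.sessions = {}            # session_id -> {"spans": [[start, end], ...], "last_seen": t}

            def record(self, session_id, position):
                now = time.time()
                s = self.sessions.setdefault(session_id, {"spans": [], "last_seen": now})
                spans = s["spans"]
                # Extend the current span if this ping follows on from the last one,
                # otherwise (a seek/scrub) start a new span.
                if spans and 0 <= position - spans[-1][1] <= 2 * self.heartbeat:
                    spans[-1][1] = position
                else:
                    spans.append([position, position])
                s["last_seen"] = now

            def flush_inactive(self, store):
                """Call periodically; `store` persists (session_id, spans) in one write."""
                now = time.time()
                stale = [k for k, v in self.sessions.items()
                         if now - v["last_seen"] > self.timeout]
                for session_id in stale:
                    store(session_id, self.sessions.pop(session_id)["spans"])

    Each flushed session becomes a handful of (start, end) rows instead of hundreds of heartbeat rows, so the write volume scales with the number of sessions rather than with playback time.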

    Read the article

  • How to disable a mod_security2 rule (false positive) for one domain on CentOS 5

    - by nicholas.alipaz
    I have mod_security enabled on a CentOS 5 server, and one of the rules is keeping a user from posting some text on a form. The text is legitimate, but it contains the word 'create' and an HTML <table> tag later on, so it is triggering a false positive. The error I am receiving is below:

        [Sun Apr 25 20:36:53 2010] [error] [client 76.171.171.xxx] ModSecurity: Access denied with code 500 (phase 2).
        Pattern match "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" at ARGS:body.
        [file "/usr/local/apache/conf/modsec2.user.conf"] [line "352"] [id "300015"] [rev "1"]
        [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.mysite.com"]
        [uri "/node/181/edit"] [unique_id "@TaVDEWnlusAABQv9@oAAAAD"]

    and here is /usr/local/apache/conf/modsec2.user.conf (line 352):

        #Generic SQL sigs
        SecRule ARGS "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" "id:1,rev:1,severity:2,msg:'Generic SQL injection protection'"

    The questions I have are:

    1. What should I do to "whitelist" this request or allow it to get through? What file do I create, and where?
    2. How should I alter this rule?
    3. Can I restrict the change to the one domain, since it is the only one having the issue on this dedicated server, or is there a better way to exclude table tags?

    Thanks guys

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure. I was wondering whether there is a definitive list somewhere of what is and is not supported by SQL Azure compared with SQL Server 2008? I have had a look through Google, but I've noticed that some of the blog posts are missing things which I have found through my own testing.

    For example, quite a lot is summarised in this blog entry: http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html

    - Common Language Runtime (CLR)
    - Database file placement
    - Database mirroring
    - Distributed queries
    - Distributed transactions
    - Filegroup management
    - Global temporary tables
    - Spatial data and indexes
    - SQL Server configuration options
    - SQL Server Service Broker
    - System tables
    - Trace flags

    which is a repeat of the MSDN page: http://msdn.microsoft.com/en-us/library/ff394115.aspx

    From my own testing, the following also seem to have issues when migrating from SQL Server 2008 to Azure:

    - XML types (MSDN does mention large custom types - I guess that may include this, even if the data schema is really small?)
    - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a fuller list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • JDBC/OSGi and how to dynamically load drivers without explicitly stating dependencies in the bundle?

    - by Chris
    Hi. This is a biggie. I have a well-structured yet monolithic code base with a primitive modular architecture (all modules implement interfaces yet share the same classpath). I realize the folly of this approach and the problems it presents when I deploy on application servers that may have different, conflicting versions of my libraries. I depend on around 30 jars right now and am midway through bundling them with bnd.

    Some of my modules are easy to declare versioned dependencies for, such as my networking components: they statically reference classes within the JRE and other bnd-wrapped libraries. My JDBC-related components, however, instantiate drivers via Class.forName(...) and can use any number of drivers.

    I am breaking everything up into OSGi bundles by service area:

    - my core classes/interfaces
    - reporting-related components
    - database access components (via JDBC)
    - etc.

    I want my code to remain usable without OSGi, as a single jar containing all my dependencies (via JARJAR), and also to be modular via the OSGi metadata and granular bundles with dependency information. My questions:

    1. How do I configure my bundle and my code so that it can dynamically use any driver on the classpath and/or within the OSGi container environment (Felix/Equinox/etc.)?
    2. Is there a runtime way of detecting whether I am running in an OSGi container that works across containers (Felix/Equinox/etc.)?
    3. Do I need to use a different class-loading mechanism when I am in an OSGi container?
    4. Am I required to import OSGi classes into my project to be able to load a JDBC driver that is unknown at bundle time via my database module?
    5. I also have a second way of obtaining a driver (via JNDI, which is only really applicable when running in an app server); do I need to change my JNDI access code for OSGi-aware app servers?

    Read the article

  • Programmatically create web application & sub site

    - by Tejas
    I am using the following code to programmatically create a web application and a sub site:

        try
        {
            SPWebApplicationBuilder webAppBuilder = new SPWebApplicationBuilder(SPFarm.Local);
            SPWebApplication newApplication;
            int myPort = 3333;
            webAppBuilder.Port = myPort; // Port no.
            webAppBuilder.RootDirectory = new System.IO.DirectoryInfo("C:\\Inetpub\\wwwroot\\wss\\VirtualDirectories\\" + myPort);
            webAppBuilder.CreateNewDatabase = true; // Create new database
            webAppBuilder.DatabaseName = "WSS_Content_372772890a5a4a04abb460cd9ea3bd22"; // Database name
            webAppBuilder.DatabaseServer = webAppBuilder.DefaultZoneUri.Host; // Database server host name/computer name
            webAppBuilder.UseNTLMExclusively = true; // Use NTLM authentication

            newApplication = webAppBuilder.Create(); // Create new web application
            newApplication.Provision();              // Provision it into the web farm

            using (var mySiteCollection = newApplication.Sites.Add("/", "Site 3", "Site 3 Decscription", 1033, "STS#1",
                "In-Wai-Svr2\\Administrator", "Administrator", "[email protected]"))
            {
                using (var rootWeb = mySiteCollection.RootWeb)
                {
                    // Code customizing the root site goes here
                    using (var subSite = rootWeb.Webs.Add("Site3.1", "Site 3.1", "Site 3.1 Description", 1033, "STS#1", true, false))
                    {
                        // Code customizing the sub site goes here
                    }
                }
            }
        }
        catch (Exception ex)
        {
            ;
        }

    It works fine, i.e. it creates the web application on the specified port with the sub site, but the sub site is not shown under the Sites menu of the Quick Launch bar, nor as a separate tab on the home page. Is it possible to do this programmatically?

    Read the article

  • Search for a book by title and author

    - by Swoosh
    I have a table with these columns: author first name, author last name, and book title. Multiple users insert into the database through an import, and I'd like to avoid duplicates. So I'm trying to do something like this. Say I have a record in the db:

        First Name: "Isaac"
        Last Name:  "Asimov"
        Title:      "I, Robot"

    If a user tries to add the same book again, the incoming data would basically be unsplit text (not separated into author first name, author last name, and book title), so it might look like any of these:

        "Isaac Asimov - I Robot"
        "Asimov, Isaac - I Robot"
        "I Robot by Isaac Asimov"

    You see what I'm getting at? (I cannot force users to split every book into author first name, author last name, and book title, and I don't even like the idea of forcing them, because it's not very user-friendly.)

    What is the best way (in SQL) to compare all these possible forms of the book data to what I have in the database, so as not to add the same book twice? I was thinking about suggesting to the user: "Is THIS the book you are trying to add?" (imagine a list instead of the word THIS, just like the Related Questions list when asking a question on Stack Overflow). I was thinking about SOUNDEX and maybe the LIKE operator, but so far I haven't got the results I was hoping for.
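
    If exact SQL matching (SOUNDEX/LIKE) keeps falling short, one application-side option is to normalise both the stored record and the incoming free text and score their similarity before deciding whether to suggest it as a duplicate. A minimal sketch; the threshold and normalisation rules are only illustrative:

        import difflib
        import re

        def tokens(text):
            """Lower-case, strip punctuation and return a sorted token string."""
            return " ".join(sorted(re.sub(r"[^a-z0-9 ]+", " ", text.lower()).split()))

        def similarity(incoming, first_name, last_name, title):
            existing = tokens(f"{first_name} {last_name} {title}")
            return difflib.SequenceMatcher(None, existing, tokens(incoming)).ratio()

        existing_record = ("Isaac", "Asimov", "I, Robot")
        for candidate in ("Isaac Asimov - I Robot",
                          "Asimov, Isaac - I Robot",
                          "I Robot by Isaac Asimov"):
            score = similarity(candidate, *existing_record)
            if score > 0.8:   # tune the threshold against real data
                print(f"possible duplicate ({score:.2f}): {candidate}")

    Sorting the tokens makes the comparison order-insensitive, which is what handles "Asimov, Isaac" versus "Isaac Asimov".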

    Read the article

  • HTTP Digest authentication: handling different browser charsets

    - by user160561
    Hi all. I am trying to use the HTTP Digest authentication scheme with my PHP (Apache module) based website. In general it works fine, but when it comes to verifying the username/hash against my user database I run into a problem.

    Of course I do not want to store the user's password in my database, so I store the A1 hash value instead (which is md5($username . ':' . $realm . ':' . $password)) - this is exactly what the browser does when it builds the hashes it sends back. The problem: I am not able to detect whether the browser did this calculation in its ISO-8859-1 fallback (like Firefox and IE) or in UTF-8 (Opera) or something else. I have chosen to do the calculation in UTF-8 and store that md5 hash, which leads to failed authentication in Firefox and IE.

    How do you solve this problem? Just avoid this auth scheme? Store an md5 hash for each charset? Force users onto Opera?

    (Terms like A1 refer to the http://php.net/manual/en/features.http-auth.php example; for Digest access authentication see the corresponding Wikipedia entry.)
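
    The "store an md5 hash per charset" option is straightforward to sketch: compute one A1 per encoding when the password is set, then accept a login if the digest response validates against any of them. Python here purely for illustration - the same hashing works in PHP:

        import hashlib

        def a1_hash(username, realm, password, encoding):
            raw = f"{username}:{realm}:{password}".encode(encoding)
            return hashlib.md5(raw).hexdigest()

        username, realm, password = "jürgen", "example.com", "geheim"
        stored_a1 = {enc: a1_hash(username, realm, password, enc)
                     for enc in ("utf-8", "iso-8859-1")}

        # At login time, build the expected digest response from each stored A1
        # in turn and accept the request if any of them matches what the browser sent.
        print(stored_a1)

    The two hashes only differ when the username or password contains non-ASCII characters, so for pure-ASCII accounts a single stored A1 already works in every browser.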

    Read the article

  • Using Rails, problem testing has_many relationship

    - by east
    The summary is that I've got code that works when tested manually, but it isn't doing what I would expect when I try to build an automated test. Here are the details.

    I have two models, Payment and PaymentTransaction:

        class Payment
          ...
          has_many :transactions, :class_name => 'PaymentTransaction'

        class PaymentTransaction
          ...
          belongs_to :payment

    A PaymentTransaction is only created inside a Payment model method, like so:

        def pay_up
          ...
          transactions.create!(params...)
          ...
        end

    I've tested this code manually, inspected the database, and everything works well. The failing automated test looks like this:

        def test_pay_up
          purchase = Payment.new(...)
          assert purchase.save
          assert_equal purchase.state, :initialized.to_s
          assert purchase.pay_up  # this should create a new PaymentTransaction...
          assert_equal purchase.state, :succeeded.to_s
          assert_equal purchase.transactions.count, 1  # FAILS HERE; transactions is an empty array
        end

    If I step through the code, it's clear that the PaymentTransaction is getting created correctly (though I can't see it in the database because everything happens inside a testing transaction). What I can't figure out is why transactions returns an empty array in the test when I know a valid PaymentTransaction is getting created. Does anybody have some suggestions? Thanks in advance, east

    Read the article

  • Multi-level shop, XML or SQL - best practice?

    - by danrichardson
    Hello. I have a general "best practice" question about building a multi-level shop, which I hope doesn't get marked down/deleted, as I personally think it's quite a good "subjective" question.

    I am the developer in charge (for the most part) of maintaining and evolving a CMS and its associated front-end functionality. Over the past half year I have developed a multi-level shop system, so that an infinite depth of categories can exist down to the product level, and it all works fine. However, over the last week or so I have been questioning my own methods on the front end and the best way to present the multi-level data structure.

    I currently use a SQL Server 2000 database and pull out all the shop levels, then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. In my head this seems quite processing-heavy, but we're not talking about thousands of rows - generally only 1-500.

    Recently I have been toying with the idea of storing the structure in an XML document (as well as in the database), sending last-modified headers when serving/requesting it, and processing it as and when necessary with an XSL(T) document on the server side. This is a handy, reusable way of storing the data, but does it carry more overhead because I am opening and closing files? The XML would also need a bit of processing to pull out blocks - if, for instance, I wanted to show two levels midway through the tree for a side menu.

    I use the above method for sitemap purposes, so I already have code which does what I need, but I'm unsure which approach is best. Maybe a hybrid, which pulls out the data, sorts it and then builds an XML document/stream (XDocument/XmlDocument) for XSL processing, is a good way? This is how the CMS currently works for the shop.

    So really (and thanks for sticking with me on this), I am just wondering which methods other people use or would recommend as the best/most logical way of doing things. Thanks, Dan
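
    For a sense of scale, the in-memory pass described above (flat rows in, nested structure out) is a single O(n) walk over the rows, so a few hundred categories is trivial whichever storage you pick. A rough sketch of that shaping step, with illustrative field names:

        def build_tree(rows):
            """rows: iterable of (id, parent_id, name) tuples from the category table;
            parent_id is None for top-level categories. Returns the nested structure."""
            nodes = {row_id: {"id": row_id, "name": name, "children": []}
                     for row_id, _, name in rows}
            roots = []
            for row_id, parent_id, _ in rows:
                target = nodes[parent_id]["children"] if parent_id in nodes else roots
                target.append(nodes[row_id])
            return roots

        categories = [(1, None, "Clothing"), (2, 1, "Shirts"),
                      (3, 2, "Long sleeve"), (4, None, "Books")]
        print(build_tree(categories))

    The same walk works whether the flat rows come from SQL or from a cached XML document, which is why the storage choice is mostly about caching and invalidation rather than processing cost.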

    Read the article

  • SQL Server 2005, wide indexes, computed columns, and sargable queries

    - by luksan
    In my database, assume we have a table defined as follows:

        CREATE TABLE [Chemical](
            [ChemicalId] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
            [Name] nvarchar(max) NOT NULL,
            [Description] nvarchar(max) NULL
        )

    The value for Name can be very large, so we must use nvarchar(max). Unfortunately, we want to create an index on this column, but nvarchar(max) is not supported inside an index. So we create the following computed column and an associated index based upon it:

        ALTER TABLE [Chemical] ADD [Name_Indexable] AS LEFT([Name], 20)
        CREATE INDEX [IX_Name] ON [Chemical]([Name_Indexable]) INCLUDE([Name])

    The index will not be unique, but we can enforce uniqueness via a trigger. If we perform the following query, the execution plan shows an index scan, which is not what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    However, if we modify the query to make it "sargable," the execution plan shows an index seek, which is what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name_Indexable]='[1,1''-Bicyclohexyl]-'
          AND [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    Is this a good solution if we control the format of all queries executed against the database via our middle tier? Is there a better way? Is this a major kludge? Should we be using full-text indexing?
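
    The prefix-column trick itself is portable, and the seek-versus-scan behaviour is easy to reproduce outside SQL Server. A small SQLite sketch of the same idea (SQLite only because it is easy to run; here the prefix column is maintained by hand rather than computed):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE chemical (
                chemical_id    INTEGER PRIMARY KEY,
                name           TEXT NOT NULL,
                name_indexable TEXT NOT NULL   -- first 20 characters of name
            );
            CREATE INDEX ix_name ON chemical (name_indexable);
        """)

        name = "[1,1'-Bicyclohexyl]-2-carboxylic acid, methyl ester"
        db.execute("INSERT INTO chemical (name, name_indexable) VALUES (?, ?)",
                   (name, name[:20]))

        # The equality predicate on the short, indexed prefix column lets the engine
        # seek; the full-name predicate then filters the few candidate rows.
        plan = db.execute(
            "EXPLAIN QUERY PLAN "
            "SELECT chemical_id FROM chemical WHERE name_indexable = ? AND name = ?",
            (name[:20], name)).fetchall()
        print(plan)   # expect a SEARCH ... USING INDEX ix_name line rather than a SCAN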

    Read the article

  • Saving multiple objects in a single call in Rails

    - by CaptnCraig
    I have a method in Rails that is doing something like this:

        a = Foo.new("bar")
        a.save
        b = Foo.new("baz")
        b.save
        ...
        x = Foo.new("123", :parent_id => a.id)
        x.save
        ...
        z = Foo.new("zxy", :parent_id => b.id)
        z.save

    The problem is that this takes longer and longer the more entities I add. I suspect this is because it has to hit the database for every record. Since they are nested, I know I can't save the children before the parents are saved, but I would like to save all of the parents at once, and then all of the children. It would be nice to do something like:

        a = Foo.new("bar")
        b = Foo.new("baz")
        ...
        saveall(a, b, ...)
        x = Foo.new("123", :parent_id => a.id)
        ...
        z = Foo.new("zxy", :parent_id => b.id)
        saveall(x, ..., z)

    That would do it all in only two database hits. Is there an easy way to do this in Rails, or am I stuck doing it one at a time?

    Read the article

  • How can I write System preferences with Java? Can I invoke UAC?

    - by Jonas
    How can I write system preferences in Java, using Preferences.systemRoot()? I tried this:

        Preferences preferences = Preferences.systemRoot();
        preferences.put("/myapplication/databasepath", pathToDatabase);

    But I got this error message:

        2010-maj-29 19:02:50 java.util.prefs.WindowsPreferences openKey
        VARNING: Could not open windows registry node Software\JavaSoft\Prefs at root 0x80000002. Windows RegOpenKey(...) returned error code 5.
        Exception in thread "AWT-EventQueue-0" java.lang.SecurityException: Could not open windows registry node Software\JavaSoft\Prefs at root 0x80000002: Access denied
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.openKey(Unknown Source)
            at java.util.prefs.WindowsPreferences.putSpi(Unknown Source)
            at java.util.prefs.AbstractPreferences.put(Unknown Source)
            at biz.accountia.pos.install.Setup$2.actionPerformed(Setup.java:43)

    I want to do this because I am installing an embedded JavaDB database and want to let multiple users on the computer use the same database with the application. How can I solve this? Can I invoke UAC and write the value as Administrator from Java? And if I write it while logged in as Administrator, can my Java application still read the value when I'm logged in as an ordinary user?

    Read the article

  • Setting nested object to null when selected option has empty value

    - by Javi
    Hello, I have a class which models a user and another which models his country, something like this:

        public class User{
            private Country country;
            //other attributes and getters/setters
        }

        public class Country{
            private Integer id;
            private String name;
            //other attributes and getters/setters
        }

    I have a Spring form with a combo box so the user can select his country, or select the undefined option to indicate that he doesn't want to provide this information:

        <form:select path="country">
            <form:option value="">-Select one-</form:option>
            <form:options items="${countries}" itemLabel="name" itemValue="id"/>
        </form:select>

    In my controller I get the auto-populated object with the user information, and I want country to be set to null when the "-Select one-" option has been selected. So I have set up an init binder with a custom editor, like this:

        @InitBinder
        protected void initBinder(WebDataBinder binder) throws ServletException {
            binder.registerCustomEditor(Country.class, "country", new CustomCountryEditor());
        }

    and my editor does something like this:

        public class CustomCountryEditor {
            @Override
            public String getAsText() {
                //I return the id of the country
            }

            @Override
            public void setAsText(String str) {
                //I search the database for a country with id = new Integer(str)
                //and set country to that value,
                //or I set country to null in case str == null
            }
        }

    When I submit the form it works: country is set to null when the "-Select one-" option was selected, or to the selected country instance otherwise. The problem is when I load the form. I have a method like the following one to load the user information:

        @ModelAttribute("user")
        public User getUser(){
            //loads user from database
        }

    The object I get from getUser() has country set to a specific country (not null), but no option is selected in the combo box. I've debugged the application and the CustomCountryEditor works correctly when setting and getting the text, though getAsText is called for every item in the countries list, not only for the country field. Any idea? Is there a better way to set the country object to null when I select the no-country option in the combo box? Thanks

    Read the article

  • How can I do rapid application development with ASP.NET MVC?

    - by Erik Forbes
    I've been given a short amount of time (~80 hours to start with) to replace an existing Access database with a full-blown SQL + web system, and I'm enumerating my options. I would like to use ASP.NET MVC, but I'm unsure how to use it effectively on such a short timetable. For the database backend I'll be using LINQ to SQL, as it's a product I already know and I can get something working with it quickly. Does anyone have experience using ASP.NET MVC in this way and can share some insight?

    Edit: The reason I'm interested in ASP.NET MVC is because I know (100% confirmed) that there will be more work to do after this first round, and I'd like my maintenance work to be as easy as possible. In my experience, WebForms applications tend to break down over repeated maintenance, despite discipline. Maybe there's a middle ground? How difficult would it be to, say, build the app with WebForms, then migrate it to MVC later when I have more time budgeted to the project?

    Edit 2: Further background: the Access application I'm replacing is used in some capacity by everyone in the building, and since it was upgraded from Access 98 to 2003 it has been crashing daily, causing hours of lost productivity as people have to re-enter data since the last backup. This is the reason for the short timescale - this is a critical business function, and they can't afford to keep re-entering data on a daily basis.

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whereas it was originally written against SQL Server. The problem I am getting now is that when I call dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batched SQL statements. The auto-generated table adapter command looks like this:

        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
              SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)";

    What is it doing? It looks like it is inserting new records from the DataTable into the database, and then reading those records back from the database into the DataTable. What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate the table adapters, as the people who knew how have long since left.
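
    For context, the generated pattern is the generic insert-then-read-back idiom: after the INSERT, the adapter re-selects the row so the in-memory copy picks up anything the database filled in (identity values, defaults, trigger-modified columns). A language-neutral sketch of the same round trip, using SQLite purely for illustration:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""
            CREATE TABLE t (
                id      INTEGER PRIMARY KEY,               -- database-assigned identity
                field1  TEXT,
                created TEXT DEFAULT CURRENT_TIMESTAMP     -- database-assigned default
            )""")

        cur = db.execute("INSERT INTO t (field1) VALUES (?)", ("hello",))

        # Read the row straight back so the application sees the generated id and
        # default value - which is what the trailing SELECT does for the DataRow.
        row = db.execute("SELECT id, field1, created FROM t WHERE id = ?",
                         (cur.lastrowid,)).fetchone()
        print(row)

    So removing the trailing SELECTs should only matter if the application relies on those refreshed values after calling Update.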

    Read the article

  • phpPgAdmin: how does it kick users out of Postgres so it can drop a database?

    - by egarcia
    I've got a PostgreSQL database (I'm the owner) and I'd like to drop it and re-create it from a dump. The problem is that a couple of applications (two websites, Rails and Perl) access the db regularly, so I get a "database is being accessed by other users" error.

    I've read that one possibility is getting the pids of the processes involved and killing them individually. I'd like to do something cleaner, if possible. phpPgAdmin seems to do what I want: I am able to drop schemas using its web interface, even while the websites are up, without getting errors. So I'm investigating how its code works. However, I'm no PHP expert.

    While trying to understand the phpPgAdmin code, I found a line (257 in Schemas.php) that says:

        $data->dropSchema(...)

    $data is a global variable and I could not find where it is defined. Any pointers would be greatly appreciated.
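
    One thing worth noting while digging: dropping a schema does not require exclusive access to the database, which is why it succeeds while the sites are connected, whereas DROP DATABASE does. If you do end up clearing out the other sessions rather than hunting pids by hand, PostgreSQL can do it server-side with pg_terminate_backend. A sketch using psycopg2 (assumed names; on older servers the pg_stat_activity column is procpid rather than pid):

        import psycopg2

        target = "mydb"                              # database to drop and re-create

        conn = psycopg2.connect(dbname="postgres")   # connect to a *different* database
        conn.autocommit = True                       # DROP DATABASE can't run inside a transaction
        cur = conn.cursor()

        # Disconnect everyone else who is using the target database.
        cur.execute("""
            SELECT pg_terminate_backend(pid)
            FROM pg_stat_activity
            WHERE datname = %s AND pid <> pg_backend_pid()
        """, (target,))

        # Identifiers can't be bound as parameters; target comes from trusted config here.
        cur.execute('DROP DATABASE "%s"' % target)
        cur.execute('CREATE DATABASE "%s"' % target)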

    Read the article

  • How to port an Ajax CMS based on metadata to ASP.NET MVC?

    - by Maushu
    I'm maintaining a CMS that feels like it was made in the age of dinosaurs (ASP.NET 1.0?) and I've decided to upgrade it with ASP.NET MVC and jQuery. But I have some problems regarding the design/specifications of the CMS, which I cannot change.

    The CMS: it uses JavaScript. A lot. As in "I don't load pages, I request new pages using Ajax and render the information using JavaScript" a lot. Not to mention the animations, the weird horizontal presentation of structures... Anyway, apart from the first page (the login page), every other "page" is just data requested from a web service that comes with the website. Would MVC have any problems with this design?

    The database: it is SQL Server 2008 and, like the CMS, this part is also... interesting. Basically, the user can create data structures using metadata (saved in the Structure table). These structures are saved in tables that are created (and regenerated when changed) at runtime using that metadata. I don't know how I would implement this part in MVC.

    The question is: can and should I convert this project to MVC? Any tips regarding the metadata and the overuse of Ajax?

    Read the article

  • Android - lifecycle and saving-instance-state questions

    - by The Salt
    My application has a form for creating a new user, with the relevant details and information about the user. There's no problem there; the question is what happens when the user leaves the activity without pressing the confirm button. Here's what I want to do:

    1. If the user presses the back button, attempt to save all the data to the database and inform the user.
    2. If the activity is interrupted (e.g. by a phone call), save all the data to a temporary location so that when the activity is at the top of the stack again, nothing appears to have changed (but the data still hasn't been saved to the database).
    3. If the activity gets killed for resources while in the background, do the same as point 2 (i.e. when the activity is started again, it appears that nothing has changed).
    4. If the whole application is started again (by clicking on the icon) and there is temporary data stored from points 2 or 3, navigate to the "create user" activity and display the data as if nothing had changed.

    Here's how I'm currently trying to do it:

    - Use onDestroy() and isFinishing() to detect when the activity is being destroyed for good, to cover point 1 (and then try to save all data).
    - Save all data into a bundle with onSaveInstanceState(), to cover point 2.
    - Does the bundle created with onSaveInstanceState() survive the activity being killed for resources, so the previous state can be restored as in point 3?
    - No idea how to implement point 4.

    Any help would be massively appreciated. Cheers!

    Read the article

  • Images from a SQL Server JPG/PNG image column not being type-converted to Bitmap in HttpHandlers

    - by kanchirk
    Our Silverlight 3.0 client successfully consumes images stored on and retrieved from the file system through ASP.NET HttpHandlers. We are now trying to store and read back images using a SQL Server 2008 database. Please find the stripped-down code below; it fails with the exception "Bitmap is not valid".

        //Store document to the database
        private void SaveImageToDatabaseKK(HttpContext context, string pImageFileName)
        {
            try
            {
                //ADO.NET Entity Framework
                ImageTable documentDB = new ImageTable();
                int intLength = Convert.ToInt32(context.Request.InputStream.Length);

                //Move the file contents into the byte array
                Byte[] arrContent = new Byte[intLength];
                context.Request.InputStream.Read(arrContent, 0, intLength);

                //Insert record into the Document table
                documentDB.InsertDocument(pImageFileName, arrContent, intLength);
            }
            catch { }
        }

    The method that reads the row back from the table and sends it to the client:

        private void RetrieveImageFromDatabaseTableKK(HttpContext context, string pImageName)
        {
            try
            {
                ImageTable documentDB = new ImageTable();
                var docRow = documentDB.GetDocument(pImageName); //based on ImageName, which is unique
                //The DocData column in the table is of type Image
                if (docRow != null && docRow.DocData != null && docRow.DocData.Length > 0)
                {
                    Byte[] bytImage = docRow.DocData;
                    if (bytImage != null && bytImage.Length > 0)
                    {
                        Bitmap newBmp = ConvertToBitmap(bytImage);
                        if (newBmp != null)
                        {
                            newBmp.Save(context.Response.OutputStream, ImageFormat.Jpeg);
                            newBmp.Dispose();
                        }
                    }
                }
            }
            catch (Exception exRI) { }
        }

        // Convert byte array to Bitmap (byte[] to Bitmap)
        protected Bitmap ConvertToBitmap(byte[] bmp)
        {
            if (bmp != null)
            {
                try
                {
                    TypeConverter tc = TypeDescriptor.GetConverter(typeof(Bitmap));
                    Bitmap b = (Bitmap)tc.ConvertFrom(bmp); //This is where the exception occurs
                    return b;
                }
                catch (Exception) { }
            }
            return null;
        }

    Read the article

  • C#+BDE+DBF problem

    - by Drabuna
    I have a big problem: I have lots of .dbf files (~50,000) and I need to import them into an Oracle database. I open the connection like this:

        OleDbConnection oConn = new OleDbConnection();
        OleDbCommand oCmd = new OleDbCommand();
        oConn.ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + directory +
            ";Extended Properties=dBASE IV;User ID=Admin;Password=";
        oCmd.Connection = oConn;
        oCmd.CommandText = @"SELECT * FROM " + tablename;
        try
        {
            oConn.Open();
            resultTable.Load(oCmd.ExecuteReader());
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
        oConn.Close();
        oCmd.Dispose();
        oConn.Dispose();

    I read the files in a loop and then insert the data into Oracle. Everything's fine - BUT: there are about 1,000 files that I can't open; they raise a "not a table" exception. So I googled and installed the Borland Database Engine. Now those files open fine... but no: with BDE installed, the loop throws a "System resource exceeded" exception on the 1024th file, even though I have plenty of free resources. When I remove BDE, the "System resource exceeded" error goes away, but then I can't read all the files. Help please.

    PS: I tried using ODBC but nothing changes.

    Read the article

  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background: imagine this situation. You have a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water), like in the following picture (A, B, C, D are other atoms and the dots mean a contact):

        A   B
         .   .
           O
          / \
         H   H
        .     .
        C      D

    I have the information about all the dots, and I need to eliminate the water in the centre and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D.

    Database structure: currently I have the following structure in the database.

    Table atoms:

        "id" integer PRIMARY KEY,
        "amino" char(3) NOT NULL,         -- HOH for water, or another value
        -- other columns identifying the atom

    Table contacts:

        "acceptor_id" integer NOT NULL,   -- the atom near my hydrogen, here C or D
        "donor_id" integer NOT NULL,      -- here A or B
        "directness" char(1) NOT NULL,    -- D for direct, W for water-mediated
        -- other columns about the contact, such as the distance

    Current (insufficient) solution: I go through all the contacts whose donor.amino = "HOH". In this sample case, that selects the contacts to C and D. For each of these selected contacts, I look up contacts having the same acceptor_id as the donor_id of the currently selected contact. From this information I create the new contact. At the end, I delete all contacts to or from HOH.

    This way I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach - finding two contacts having the same donor_id - I end up with duplicate contacts (C-D and D-C).

    Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming about a one-page SQL query which retrieves just these six wanted rows. :-) It is preferable to preserve the information about who is the donor where possible, but not strictly necessary.

    Big thanks to all of you who read this question to this point.
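
    One way to get all six pairs exactly once is to first collect the water's neighbours from both directions (contacts where the water is the donor and contacts where it is the acceptor) and then enumerate unordered pairs of that neighbour set. A sketch of that approach against an SQLite file with the schema above (file name is illustrative; which member of a mediated pair is recorded as acceptor is arbitrary here):

        import sqlite3
        from itertools import combinations

        db = sqlite3.connect("contacts.db")   # path is illustrative

        waters = [r[0] for r in db.execute("SELECT id FROM atoms WHERE amino = 'HOH'")]

        for water in waters:
            # Neighbours on either side of the water: A, B (donors) and C, D (acceptors).
            neighbours = {r[0] for r in db.execute(
                "SELECT donor_id FROM contacts WHERE acceptor_id = ? "
                "UNION "
                "SELECT acceptor_id FROM contacts WHERE donor_id = ?", (water, water))}

            # Every unordered pair appears exactly once, so no C-D/D-C duplicates.
            db.executemany(
                "INSERT INTO contacts (acceptor_id, donor_id, directness) VALUES (?, ?, 'W')",
                combinations(sorted(neighbours), 2))

            # Finally drop the contacts that touched this water.
            db.execute("DELETE FROM contacts WHERE donor_id = ? OR acceptor_id = ?",
                       (water, water))

        db.commit()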

    Read the article

  • Problems inserting values from textboxes

    - by simon
    I'm trying to insert the current date into the database, and I always get the message (when I press the button on the form to save to my Access database) that the data type is incorrect in the conditional expression. The code:

        string conString = "Provider=Microsoft.Jet.OLEDB.4.0;" +
            "Data Source=C:\\Users\\Simon\\Desktop\\save.mdb";
        OleDbConnection empConnection = new OleDbConnection(conString);
        string insertStatement = "INSERT INTO obroki_save " +
            "([ID_uporabnika],[ID_zivila],[skupaj_kalorij]) " +
            "VALUES (@ID_uporabnika,@ID_zivila,@skupaj_kalorij)";
        OleDbCommand insertCommand = new OleDbCommand(insertStatement, empConnection);
        insertCommand.Parameters.Add("@ID_uporabnika", OleDbType.Char).Value = users.iDTextBox.Text;
        insertCommand.Parameters.Add("@ID_zivila", OleDbType.Char).Value = iDTextBox.Text;
        insertCommand.Parameters.Add("@skupaj_kalorij", OleDbType.Char).Value = textBox1.Text;
        empConnection.Open();
        try
        {
            int count = insertCommand.ExecuteNonQuery();
        }
        catch (OleDbException ex)
        {
            MessageBox.Show(ex.Message);
        }
        finally
        {
            empConnection.Close();
            textBox1.Clear();
            textBox2.Clear();
            textBox3.Clear();
            textBox4.Clear();
            textBox5.Clear();
        }

    I have now taken the date out of the statement (I let Access fill in the date itself), yet the problem is still the same. Is the first parameter line OK - users.iDTextBox.Text? Please help!

    Read the article

  • How do I pass a lot of parameters to views in Django?

    - by Mark
    I'm very new to Django and I'm trying to build an application to present my data in tables and charts. Until now my learning process has gone very smoothly, but now I'm a bit stuck.

    My page view retrieves a large amount of data from the database and puts it in the context. The template then generates various HTML tables. So far so good. Now I want to add different charts to the template. I manage to do this by defining <img src=".../> tags; each Matplotlib chart is generated in my chart view and returned via:

        response = HttpResponse(content_type='image/png')
        canvas.print_png(response)
        return response

    Now I have a few questions:

    1. The data is retrieved twice from the database: once in the page view to render the tables, and again in the chart view to draw the charts. What is the best way to pass the data that is already in the page context to the chart view?
    2. I need a lot of charts, each with different datasets. I could write a chart view for each chart, but there is probably a better way. How do I pass the different dataset names to the chart view? Some charts have 20 datasets, so passing them via the URL path (like <img src="chart/dataset1/dataset2/.../dataset20/chart.png" />) doesn't seem like the right way.

    Any advice?
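
    On the second point, one common pattern is a single chart view that reads its dataset names from the query string rather than from the URL path, so the template only needs to build one <img> URL per chart. A rough sketch - load_dataset is a placeholder for whatever query code you already have, and caching its results is one way to soften the double database hit from the first point:

        # views.py
        from django.http import HttpResponse
        from matplotlib.figure import Figure
        from matplotlib.backends.backend_agg import FigureCanvasAgg

        def chart(request):
            # e.g. <img src="/chart/?dataset=flow&dataset=pressure&dataset=temp" />
            names = request.GET.getlist("dataset")

            fig = Figure()
            ax = fig.add_subplot(111)
            for name in names:
                ax.plot(load_dataset(name), label=name)   # load_dataset: your own data access
            ax.legend()

            response = HttpResponse(content_type="image/png")
            FigureCanvasAgg(fig).print_png(response)
            return response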

    Read the article

  • Slow retrieval of data from SQLite takes a long time using a ContentProvider

    - by Arlyn
    I have an Android application (running 4.0.3) that stores a lot of data in Table A, which lives in an SQLite database. I am using a ContentProvider as an abstraction layer above the database. "A lot of data" here means almost 80,000 records per month. Table A is created like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
            + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
            + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
            + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
            + "," + COLUMN_TAG + " TEXT"
            + "," + COLUMN_VALUE + " REAL NOT NULL"
            + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
            + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
            + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
            + TABLE_A + "_" + COLUMN_TIMESTAMP
            + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns, as well as the table name, as string constants. I am already experiencing a significant slow-down when retrieving data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then display it. Obviously this may be the wrong way of doing things, and I am trying to find a better approach using a ContentProvider.

    But that is not what bothers me. The problem is that, for some reason, it takes much longer to retrieve data from the other tables, which have at most 12 records each, and I see this delay grow as the number of records in Table A grows. That does not make any sense. I can understand a delay when retrieving data from Table A, but why is there a delay in retrieving data from the other tables? To clarify, there is no such delay when Table A is empty or has fewer than 3,000 records. What could be the problem?

    Read the article

  • How to compile OCaml to native code

    - by Indra Ginanjar
    I'm really interested in learning OCaml. It's fast (they say it can be compiled to native code) and it's functional, so I tried to code something easy, like enabling the MySQL event scheduler:

        #load "unix.cma";;
        #directory "+mysql";;
        #load "mysql.cma";;

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in
        (Mysql.exec db sql);;

    It works fine in the OCaml interpreter, but when I tried to compile it to native code (I'm using Ubuntu Karmic), neither of these commands worked:

        ocamlopt -o mysqleventon mysqleventon.ml unix.cmxa mysql.cmxa
        ocamlopt -o mysqleventon mysqleventon.ml unix.cma mysql.cma

    I also tried:

        ocamlc -c mysqleventon.ml unix.cma mysql.cma

    All of them give the same message:

        File "mysqleventon.ml", line 1, characters 0-1:
        Error: Syntax error

    Then I removed the "#load" directives, so the code looked like this:

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;
        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in
        (Mysql.exec db sql);;

    Now ocamlopt gives:

        File "mysqleventon.ml", line 1, characters 9-28:
        Error: Unbound value Mysql.quick_connect

    I hope someone can tell me where I'm going wrong.

    Read the article
