Search Results

Search found 30858 results on 1235 pages for 'database tuning'.


  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been seeing is that load times will randomly blow up to 30s+ and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it happens roughly once every 15 minutes. My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case: process recycling is set to happen very infrequently, and I placed some logging in the application startup. It's also nothing to do with the database connection; the slowdown happens even when simply moving between static pages. I've watched the database with a SQL profiler, and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions, and the slowdown always happens outside of the controller; the entry-to-exit time for a controller action is always appropriately fast. Does anyone have any ideas what could be causing this? I've tried running it locally on IIS7 and haven't had the issue, so I can only think it's something to do with our hosting provider.

    Read the article

  • Entity Framework vNext wish list

    - by Fred Yang
    I have been intensively studying and using EF4 in my project, and I do feel the improvement it has over version 1. But there are some things I cannot get around easily. Here is a list of what I want to be better in EF vNext.
    1. The model designer should allow multiple views of the same model, so that I don't need to cram all my entities into a single view.
    2. Respect the user's manual edits to the edmx. Currently, some database view objects simply cannot be imported into the model, because the designer "smartly" decides that the view does not have a primary key, so I have to edit the edmx manually to correct the designer's behaviour. But on the next "update from database" task, the designer reverts my customization. For now, I either fall back to editing the edmx file entirely by hand, or I use a compare tool to capture the new update, roll back, and merge the update into my old edmx manually. The designer should allow both a default behaviour and manual control; I want the option not to let the designer refresh the changes to an imported object.
    3. Support user-defined table functions. LINQ is about composability, and stored procedures do not support composability; I wish I could use user-defined table functions, which do.
    What are your wishes for EF vNext?

    Read the article

  • EF 4 Self-Tracking Entities do not work as expected

    - by ashraf
    I am using EF4 Self-Tracking Entities (VS2010 Beta 2 CTP 2 plus the new T4 generator), but when I try to update entity information, it is not saved to the database as expected. I set up two service calls: one, GetResource(int id), returns a resource object; the second, SaveResource(Resource res), saves it. Here is the code:

        public Resource GetResource(int id)
        {
            using (var dc = new MyEntities())
            {
                return dc.Resources.Where(d => d.ResourceId == id).SingleOrDefault();
            }
        }

        public void SaveResource(Resource res)
        {
            using (var dc = new MyEntities())
            {
                dc.Resources.ApplyChanges(res);
                dc.SaveChanges(); // Nothing is saved to the database.
            }
        }

        // Windows console client calls
        var res = service.GetResource(1);
        res.Description = "New Change"; // Not updating...
        service.SaveResource(res);      // Does not change anything.

    It seems to me that ChangeTracker.State always shows as Unchanged. Is anything wrong in this code?
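
    One possible culprit, offered as an assumption rather than a confirmed diagnosis: self-tracking entities only record modifications while change tracking is active, and tracking normally starts when the entity is deserialized on the client. If the console client mutates the entity before tracking has started, every property set goes unrecorded and the state stays Unchanged. A minimal sketch of forcing tracking on, using the StartTracking method generated by the standard STE T4 template:

        var res = service.GetResource(1);
        res.StartTracking();            // begin recording changes (STE template method)
        res.Description = "New Change"; // now recorded as a modification
        service.SaveResource(res);      // ApplyChanges should now see State == Modified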

    Read the article

  • Turning on multiple result sets in an ODBC connection to SQL Server

    - by David Griffiths
    I have an application that originally needed to connect to Sybase (via ODBC), but I've needed to add the ability to connect to SQL Server as well. As ODBC should be able to handle both, I thought I was in a good position. Unfortunately, SQL Server will not, by default, let me nest ODBC commands and ODBCDataReaders; it complains that the connection is busy. I know I had to specify that multiple active result sets (MARS) were allowed in similar circumstances when connecting to SQL Server via a native driver, so I thought it wouldn't be an issue. However, the DSN wizard has no such entry when creating a System DSN. Some people have provided registry hacks to get around this (add a MARS_Connection value of Yes to HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\system-dsn-name), but this did not work. Another suggestion was to create a file DSN and add "MARS_Connection=YES" to that. That didn't work either. Finally, I tried a DSN-less connection string. I've tried this one (using MultipleActiveResultSets, the same keyword a SQL Server connection string would use):

        "Driver={SQL Native Client};Server=xxx.xxx.xxx.xxx;Database=someDB;Uid=u;Pwd=p;MultipleActiveResultSets=True;"

    and this one:

        "Driver={SQL Native Client};Server=192.168.75.33\\ARIA;Database=Aria;Uid=sa;Pwd=service;MARS_Connection=YES;"

    I have checked the various connection-string sites; they all suggest what I've already tried.

    Read the article

  • asyncore callbacks launching threads... ok to do?

    - by sbartell
    I'm unfamiliar with asyncore and have very limited knowledge of asynchronous programming, apart from a few introductory Twisted tutorials. I am most familiar with threads and use them in all my apps. One particular app uses a CouchDB database as its interface, which involves long-polling the database looking for changes and updates. The module I use for CouchDB is couchdbkit. It uses an asyncore loop to watch for these changes and send them to a callback, so I figure that callback is where I launch my worker threads. It seems a bit crude to mix asynchronous and threaded programming, though. I really like couchdbkit, but I would rather not introduce issues into my program. So, my question is: is it safe to fire threads from an async callback? Here's some code:

        def dispatch(change):
            global jobs, db_url           # jobs is my queue
            db = Database(db_url)
            # change carries the id of the document that changed;
            # I need to fetch the actual document (the work order)
            work_order = db.get(change['id'])
            worker = Worker(work_order, db)
            # fire the thread
            jobs.append(worker)
            worker.start()

        def main():
            ...
            # wait() contains the async loop
            consumer.wait(cb=dispatch, since=update_seq, timeout=10000)
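
    For what it's worth, one common pattern (a sketch under assumptions, not couchdbkit's documented usage beyond consumer.wait) is to keep the async callback trivial and hand each change off to a small fixed pool of worker threads through a thread-safe queue, so the asyncore loop never blocks and the number of threads stays bounded:

        import threading
        from Queue import Queue   # Python 2; the module is "queue" on Python 3

        jobs = Queue()

        def worker_loop():
            while True:
                change = jobs.get()            # blocks until a change arrives
                db = Database(db_url)          # Database/db_url as in the original snippet
                work_order = db.get(change['id'])
                process(work_order, db)        # hypothetical worker logic
                jobs.task_done()

        # start a small, fixed pool before entering the async loop
        for _ in range(4):
            t = threading.Thread(target=worker_loop)
            t.daemon = True
            t.start()

        def dispatch(change):
            jobs.put(change)                   # Queue.put is thread-safe and returns immediately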

    Read the article

  • LINQ XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output from a stored procedure onto an instance of a class. The problem is that my class is of a generic type:

        public class MyValue<T>
        {
            public T Value { get; set; }
        }

    Searching through a lot of blogs and articles, I've managed to get this:

        <?xml version="1.0" encoding="utf-8" ?>
        <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
          <Table Name="MyValue" Member="MyNamespace.MyValue`1">
            <Type Name="MyNamespace.MyValue`1">
              <Column Name="Category" Member="Value" DbType="VarChar(100)" />
            </Type>
          </Table>
          <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories">
            <ElementType Name="MyNamespace.MyValue`1" />
          </Function>
        </Database>

    The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I'm getting four MyValue<string> instances, but the big problem is that the Value property on all four instances is null. The property is not getting mapped, and I don't really see why. It may be worth noting that the Value property is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue? BTW, the GetResourceCategories method:

        public ISingleResult<MyValue<string>> GetResourceCategories()
        {
            IExecuteResult result = this.ExecuteMethodCall(
                this, (MethodInfo)MethodInfo.GetCurrentMethod());
            return (ISingleResult<MyValue<string>>)result.ReturnValue;
        }

    Read the article

  • What is the best way to store site configuration data?

    - by DaveDev
    I have a question about storing site configuration data. We have a platform for web applications. The idea is that different clients can have their data hosted and displayed on their own site, which sits on top of this platform. Each site has a configuration which determines which panels relevant to the client appear on which pages. The system was originally designed to keep all the configuration data for each site in a database. When the site is loaded, all the configuration data is loaded into a SiteConfiguration object, and the client's panels are generated based on the contents of this object. This works, but I find it very difficult to work with when applying change requests or adding new sites, because there is so much data to sift through and it's difficult to maintain a mental model of the site and its configuration. Recently I've been tasked with developing a subset of some of the sites to be generated as PDF documents for printing. I decided to take a different approach to defining the configuration: instead of storing configuration data in the database, I wrote XML files to contain the data. I find this much easier to work with, because instead of reading meaningless rows of data related to other meaningless rows of data, I have meaningful documents with semantic, readable information, the relationships defined by visually understandable element nesting. So now, with these two approaches to storing site configuration data, I'd like to get the opinions of people more experienced in dealing with this issue. What is the best way of storing site configuration data? Is there a better way than the two I outlined here? Note: StackOverflow is telling me the question appears to be subjective and is likely to be closed. I'm not trying to be subjective; I'd like to know how best to approach this issue next time, and whether people with industry experience could provide some input.
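
    To make the contrast concrete, here is a hypothetical fragment of the XML approach described above; every element and attribute name is invented for illustration rather than taken from the actual platform:

        <site name="client-a">
          <page id="home">
            <panel type="news" position="left" />
            <panel type="contact" position="right" />
          </page>
          <page id="products">
            <panel type="product-list" position="main" />
          </page>
        </site>

    The relationships that would otherwise be foreign keys spread across several tables can be read directly from the nesting.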

    Read the article

  • Do you have health checks in your web app or web site?

    - by Pekka
    I have built PHP-based "health check" scripts for several projects, but they were always custom-made for the occasion and not written as an abstracted, independent product. I would like to know whether such a solution exists. What I mean by "health check" is a protected web page that functions much like a suite of unit tests, but on a more operational level, showing red/yellow/green statuses for things like:
    - Are the cache directories writable?
    - Is the PHP version correct, and are the required extensions installed?
    - Is the configuration file protected from writing?
    - Is the database server reachable?
    - Do the key tables exist in the database?
    - Is there enough disk space available?
    - Is the site's front page reachable, and does it render fully ( = no PHP errors)?
    - Do the project's libraries' MD5 checksums match the original ones?
    Do you do this, or parts of it, in your applications and web sites? Are there any standardized tools for this that bring along all the functionality to perform the tests (ideally as plugins) and just need to be configured accordingly? Is there a way to set this up using one of the unit-testing frameworks available for PHP (preferably PHPUnit)? If so, do you know any resources or tutorials outlining how?
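
    As a rough illustration of the idea (the paths and thresholds are invented, and a real page would also need access protection), a minimal PHP sketch might look like this:

        <?php
        // Minimal health-check sketch: each entry maps a label to a boolean result.
        $checks = array(
            'cache dir writable'  => is_writable('/var/www/app/cache'),
            'PHP version >= 5.2'  => version_compare(PHP_VERSION, '5.2.0', '>='),
            'config not writable' => !is_writable('/var/www/app/config.php'),
            'disk space > 1 GB'   => disk_free_space('/') > 1073741824,
        );
        foreach ($checks as $label => $ok) {
            $color = $ok ? 'green' : 'red';
            echo '<div style="color:' . $color . '">' . htmlspecialchars($label)
               . ': ' . ($ok ? 'OK' : 'FAIL') . '</div>';
        }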

    Read the article

  • ALTER dilemma: how to use ALTER to set the primary key and other attributes

    - by Rachel
    I have the following table in my database, and I need to alter it to the schema shown below. Initially I was dropping the current table and re-creating it with CREATE, but I'm not supposed to do that; I have to use ALTER instead, and I'm not sure how to use ALTER to add a primary key and other constraints. Any suggestions? Current:

        CREATE TABLE `details` (
          `KEY` varchar(255) NOT NULL,
          `ID` bigint(20) NOT NULL,
          `CODE` varchar(255) NOT NULL,
          `C_ID` bigint(20) NOT NULL,
          `C_CODE` varchar(64) NOT NULL,
          `CCODE` varchar(255) NOT NULL,
          `TCODE` varchar(255) NOT NULL,
          `LCODE` varchar(255) NOT NULL,
          `CAMCODE` varchar(255) NOT NULL,
          `OFCODE` varchar(255) NOT NULL,
          `OFNAME` varchar(255) NOT NULL,
          `PRIORITY` bigint(20) NOT NULL,
          `STDATE` datetime NOT NULL,
          `ENDATE` datetime NOT NULL,
          `INT` varchar(255) NOT NULL,
          `PHONE` varchar(255) NOT NULL,
          `TV` varchar(255) NOT NULL,
          `MTV` varchar(255) NOT NULL,
          `TYPE` varchar(255) NOT NULL,
          `CREATED` datetime NOT NULL,
          `MAIN` varchar(255) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Desired:

        CREATE TABLE `details` (
          `id` bigint(20) NOT NULL,
          `code` varchar(255) NOT NULL,
          `cid` bigint(20) NOT NULL,
          `ccode` varchar(64) NOT NULL,
          `c_code` varchar(255) NOT NULL,
          `tcode` varchar(255) NOT NULL,
          `lcode` varchar(255) NOT NULL,
          `camcode` varchar(255) NOT NULL,
          `ofcode` varchar(255) NOT NULL,
          `ofname` varchar(255) NOT NULL,
          `priority` bigint(20) NOT NULL,
          `stdate` datetime NOT NULL,
          `enddate` datetime NOT NULL,
          `list` varchar(255) NOT NULL,
          `name` varchar(255) NOT NULL,
          `created` datetime NOT NULL,
          `date` datetime NOT NULL,
          `ofshn` int(20) NOT NULL,
          `ofcl` int(20) NOT NULL,
          `ofr` int(20) NOT NULL,
          PRIMARY KEY (`code`,`ccode`,`list`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Thanks!
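
    A sketch of the ALTER TABLE shape in MySQL, showing one clause of each kind; the remaining columns follow the same pattern, and the old-to-new name mapping here is inferred from the column types rather than confirmed:

        ALTER TABLE `details`
          DROP COLUMN `KEY`,                          -- remove an obsolete column
          CHANGE `ID` `id` bigint(20) NOT NULL,       -- rename (and optionally retype) a column
          CHANGE `CODE` `code` varchar(255) NOT NULL,
          CHANGE `C_CODE` `ccode` varchar(64) NOT NULL,
          ADD COLUMN `list` varchar(255) NOT NULL,    -- add a new column
          ADD COLUMN `ofshn` int(20) NOT NULL,
          ADD PRIMARY KEY (`code`, `ccode`, `list`);  -- add the composite primary key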

    Read the article

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site, and we are experiencing UTF-8 corruption issues that we are trying to resolve. The system runs on symfony 1.0 with Propel. On our development server, we run PHP 5.2.0 and MySQL 5.0.32; we do not experience corrupted UTF-8 characters there. On the live site, PHP 5.2.10 and MySQL 5.0.81 are running, and certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters show up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption:
    - Uncorrupted: ô´  Corrupted: ô?
    - Uncorrupted: S   Corrupted: ?
    We are currently using the following techniques on both the development and live servers:
    - Executing the following queries before any other queries: SET NAMES 'utf8' COLLATE 'utf8_unicode_ci' and SET CHARSET 'utf8'
    - Setting the <meta> Content-Type value to: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    - Adding the following to our .htaccess file: AddDefaultCharset utf-8
    - Using the mb_* (multibyte) PHP functions where necessary
    - Making sure database columns use utf8_unicode_ci collation
    These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection), but this does not help either. I have found some evidence that newer versions of PHP and MySQL mishandle UTF-8 character encodings.
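
    One detail worth double-checking, offered as a guess rather than a diagnosis: stock PHP has no mysql_set_encoding() function; the connection-charset call in the classic mysql extension (PHP 5.2.3 and later) is mysql_set_charset(), and the charset name in the attempt above is also misspelled ('ut8'). A minimal sketch:

        <?php
        $conn = mysql_connect('localhost', 'user', 'password');
        // Set the connection charset; unlike a plain SET NAMES query, this also
        // tells mysql_real_escape_string() which encoding to assume.
        mysql_set_charset('utf8', $conn);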

    Read the article

  • Problems sorting by date

    - by Nathan
    I'm attempting to display only data added to my database within the last 24 hours. However, for some reason, the code I've written isn't working: both entries in my database, one from 1 hour ago and one from 2 days ago, show up. Is there something wrong with my equation? Thanks!

        public void UpdateValues()
        {
            double TotalCost = 0;
            double TotalEarned = 0;
            double TotalProfit = 0;
            double TotalHST = 0;

            for (int i = 0; i <= Program.TransactionList.Count - 1; i++)
            {
                DateTime Today = DateTime.Now;
                DateTime Jan2013 = DateTime.Parse("01-01-2013");

                // Hours since Jan 1, 2013
                int TodayHoursSince2013 = Convert.ToInt32(Math.Round(Today.Subtract(Jan2013).TotalHours)); //7
                int ItemHoursSince2013 = Program.TransactionList[i].HoursSince2013; // Equals 7176, and 7130

                if (ItemHoursSince2013 - TodayHoursSince2013 <= 24)
                {
                    TotalCost += Program.TransactionList[i].TotalCost;
                    TotalEarned += Program.TransactionList[i].TotalEarned;
                    TotalProfit += Program.TransactionList[i].TotalEarned - Program.TransactionList[i].TotalCost;
                    TotalHST += Program.TransactionList[i].TotalHST;
                }
            }

            label6.Text = "$" + String.Format("{0:0.00}", TotalCost);
            label7.Text = "$" + String.Format("{0:0.00}", TotalEarned);
            label8.Text = "$" + String.Format("{0:0.00}", TotalProfit);
            label10.Text = "$" + String.Format("{0:0.00}", TotalHST);
        }
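
    One reading of the symptom, as an inference rather than a confirmed answer: the items were added in the past, so ItemHoursSince2013 - TodayHoursSince2013 is negative and therefore always <= 24, which lets every row through. The intended check is presumably the reverse difference:

        // Inferred fix: hours elapsed since the item was added.
        if (TodayHoursSince2013 - ItemHoursSince2013 <= 24)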

    Read the article

  • Special characters stripped by MySQL/PHP JSON

    - by Will Gill
    Hi, I have a simple PHP script to extract data from a MySQL database and encode it as JSON. The problem is that special characters (for example the German ä or ß characters) are stripped from the JSON response: everything after the first special character in any single field is dropped. The fields are set to utf8_bin, and in phpMyAdmin the characters display correctly. The PHP script looks like this:

        <?php
        header("Content-type: application/json; charset=utf-8");

        $con = mysql_connect('database', 'username', 'password');
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("sql01_5789willgil", $con);

        $sql = "SELECT * FROM weightevent";
        $result = mysql_query($sql);
        $row = mysql_fetch_array($result);

        $events = array();
        while ($row = mysql_fetch_array($result)) {
            $eventid = $row['eventid'];
            $userid = $row['userid'];
            $weight = $row['weight'];
            $sins = $row['sins'];
            $gooddeeds = $row['gooddeeds'];
            $date = $row['date'];

            $event = array("eventid" => $eventid, "userid" => $userid, "weight" => $weight,
                           "sins" => $sins, "gooddeeds" => $gooddeeds, "date" => $date);
            array_push($events, $event);
        }

        $myJSON = json_encode($events);
        echo $myJSON;

        mysql_close($con);
        ?>

    Sample output:

        [{"eventid":"2","userid":"1","weight":"70.1","sins":"Weihnachtspl","gooddeeds":"situps! lots and lots of situps!","date":"2011-01-02"},{"eventid":"3","userid":"2","weight":"69.9","sins":"A second helping of pasta...","gooddeeds":"I ate lots of salad","date":"2011-01-01"}]

    In the first record, the value of the 'sins' field should be "Weihnachtsplätzchen". Thanks very much!

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. Only one database is hosted (about 70 GB). The only app besides SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. SQL Server grabs the 40 GB of RAM it is allowed and then runs fast as hell. But after about 4 weeks the system becomes slower and slower, and the execution time of statements (seen in the profiler) rises gradually. Yet I cannot see anything going wrong on the server:
    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - The process monitor does not show any strange apps or anything like that
    - The event log has no interesting messages
    - There are no open transactions or blockings to see
    We have already tried the following, without effect:
    - Dropping the caches using the statements DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS
    - Restarting the app server we are using
    - Restarting the SQL Server service
    Nothing helped except restarting the whole server. Any ideas?

    Read the article

  • Deeply nested subqueries for traversing trees in MySQL

    - by nickf
    I have a table in my database where I store a tree structure using a hybrid of the Nested Set (MPTT) model (the one with lft and rght values) and the Adjacency List model (storing parent_id on each node):

        my_table (id, parent_id, lft, rght, alias)

    This question doesn't relate to any of the MPTT aspects of the tree, but I thought I'd leave them in in case anyone has a good idea about how to leverage them. I want to convert a path of aliases to a specific node. For example, "users.admins.nickf" would find the node with alias "nickf" which is a child of one with alias "admins", which is a child of "users", which is at the root. There is a unique index on (parent_id, alias). I started out by writing the function so it would split the path into its parts, then query the database one step at a time:

        SELECT `id` FROM `my_table` WHERE `parent_id` IS NULL AND `alias` = 'users'; -- 1
        SELECT `id` FROM `my_table` WHERE `parent_id` = 1 AND `alias` = 'admins';    -- 8
        SELECT `id` FROM `my_table` WHERE `parent_id` = 8 AND `alias` = 'nickf';     -- 37

    But then I realised I could do it with a single query, using a variable amount of nesting:

        SELECT `id` FROM `my_table`
        WHERE `parent_id` = (
            SELECT `id` FROM `my_table`
            WHERE `parent_id` = (
                SELECT `id` FROM `my_table`
                WHERE `parent_id` IS NULL AND `alias` = 'users'
            ) AND `alias` = 'admins'
        ) AND `alias` = 'nickf';

    Since the number of subqueries depends on the number of steps in the path, am I going to run into issues with having too many subqueries (if there even is such a thing)? Are there any better/smarter ways to perform this query?
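
    One alternative worth sketching (a standard rewrite, not taken from the source): the same variable-depth lookup can be expressed as a chain of self-joins, one per path segment, which the unique (parent_id, alias) index can satisfy directly:

        SELECT t3.`id`
        FROM `my_table` AS t1
        INNER JOIN `my_table` AS t2
            ON t2.`parent_id` = t1.`id` AND t2.`alias` = 'admins'
        INNER JOIN `my_table` AS t3
            ON t3.`parent_id` = t2.`id` AND t3.`alias` = 'nickf'
        WHERE t1.`parent_id` IS NULL AND t1.`alias` = 'users';

    Like the nested version, it still needs one join generated per segment, so either form requires building the SQL dynamically.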

    Read the article

  • SQL 2 INNER JOINS with 3 tables

    - by Jelmer Holtes
    I have a question about a SQL query. I'm building a prototype webshop in ASP.NET with Visual Studio, and I'm looking for a solution to display my products. I've built a database in MS Access consisting of multiple tables; the ones that matter for my question are Product, Productfoto and Foto. For me it is important to get three pieces of data: product title, price and image. The product title and the price are in the Product table; the images are in the Foto table. Because a product can have more than one picture, there is an N-M relation between them, so I've split it up via the Productfoto table. The connections between the tables are:

        product.artikelnummer -> productfoto.artikelnummer
        productfoto.foto_id   -> foto.foto_id

    From there I can read the filename (in the database: foto.bestandnaam). I've created the first inner join and tested it in Access; this works:

        SELECT titel, prijs, foto_id
        FROM Product
        INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikelnummer

    But I need another INNER JOIN. How can I create that? I guess something like this (this one gives me an error):

        SELECT titel, prijs, bestandnaam
        FROM Product ((
        INNER JOIN Productfoto ON product.artikelnummer = productfoto.artikkelnummer )
        INNER JOIN foto ON productfoto.foto_id = foto.foto_id)

    Can anyone help me?
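
    A sketch of the shape Access usually expects: Jet SQL wants each additional join wrapped in parentheses around the previous one, so (assuming the table and column names shown above) the query would be:

        SELECT titel, prijs, bestandnaam
        FROM (Product
        INNER JOIN Productfoto ON Product.artikelnummer = Productfoto.artikelnummer)
        INNER JOIN Foto ON Productfoto.foto_id = Foto.foto_id;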

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing, I've compared using a for loop with increment variables against a list comprehension plus map(itemgetter) and len() for counting entries in dictionaries which are in a list. Each method takes the same time. Am I doing something wrong, or is there a better approach? Here is a greatly simplified and shortened data structure:

        list = [
            {'key1': True,  'dontcare': False, 'ignoreme': False, 'key2': True,  'filenotfound': 'biscuits and gravy'},
            {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True,  'filenotfound': 'peaches and cream'},
            {'key1': True,  'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
            {'key1': False, 'dontcare': False, 'ignoreme': True,  'key2': False, 'filenotfound': 'over and under'},
            {'key1': True,  'dontcare': True,  'ignoreme': False, 'key2': True,  'filenotfound': 'Scotch and... well... neat, thanks'}
        ]

    Here is the for loop version:

        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True;
        # keep a separate count for the subset that also have key2 True
        key1 = key2 = 0
        for dictionary in list:
            if dictionary["key1"]:
                key1 += 1
                if dictionary["key2"]:
                    key2 += 1
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above:

        Counts: key1: 3, subset key2: 2

    Here is the other, perhaps more Pythonic, version:

        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True;
        # keep a separate count for the subset that also have key2 True
        from operator import itemgetter

        KEY1 = 0
        KEY2 = 1
        getentries = itemgetter("key1", "key2")
        entries = map(getentries, list)
        key1 = len([x for x in entries if x[KEY1]])
        key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)

    Output for the data above (same as before):

        Counts: key1: 3, subset key2: 2

    I'm a tiny bit surprised these take the same amount of time, and I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist, I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.
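
    For comparison, a third variant (a sketch, not from the source) counts with generator expressions and sum(), avoiding the intermediate lists entirely:

        # Generator expressions feed sum() one match at a time (Python 2.6+).
        key1 = sum(1 for d in list if d["key1"])
        key2 = sum(1 for d in list if d["key1"] and d["key2"])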

    Read the article

  • Moving front page (cursebird or foursquare)

    - by Dan Samuels
    OK, so I'll be honest: I have a good amount of experience with PHP/MySQL, I've just started learning jQuery, and I've done very little (but some) work with Ajax, so using the terms Ajax/jQuery interchangeably is a bit confusing to me. Anyway, as the title suggests, I have a website with 5 items, the 5 most recent items in a database table, and I want them to move: if a more recent one is entered, the last item is removed and the new one is put on top. As a test I've written jQuery that fades out the last one, moves the whole list down to make room at the top, and fades in a new one. However, it's just a test and has zero interaction with the database; the one that fades in is just in a hidden div. So the jQuery part is taken care of, but I'm unsure how to go about the rest. I was thinking of maybe having Ajax check a page off the site that has those 5 items in raw format, and refreshing if they change? I'm not looking for a "plz code 4 me" answer, just the concept of how it would work, or some links to get off to the right start.
    Edit: also, the 5 items are ranked, so if I click item 3 I need it to move above item 2 without a refresh, which I assume is a whole other issue.
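
    Conceptually, the usual shape for this is short polling: a timer asks a small server endpoint for the latest items, and the existing jQuery animation runs only when something has changed. A hedged sketch, where latest.php and the JSON shape are invented for illustration:

        // Poll every 5 seconds for the newest items.
        var topId = initialTopId; // id of the item currently on top (assumed known at load)
        setInterval(function () {
            $.getJSON('latest.php', function (items) {
                if (items.length && items[0].id !== topId) {
                    topId = items[0].id;
                    // run the existing fade-out / shift-down / fade-in animation,
                    // filling the new top slot from items[0]
                }
            });
        }, 5000);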

    Read the article

  • How do I view the full content of a text or varchar(MAX) column in SQL Server 2008 Management Studio

    - by adamjford
    In this live SQL Server 2008 (build 10.0.1600) database, there's an Events table, which contains a text column named Details. (Yes, I realize this should actually be a varchar(MAX) column, but whoever set this database up did not do it that way.) This column contains very large logs of exceptions and associated JSON data that I'm trying to access through SQL Server Management Studio, but whenever I copy the results from the grid to a text editor, it truncates them at 43,679 characters. I've read in various places on the Internet that you can set Maximum Characters Retrieved for XML Data (in Tools > Options > Query Results > SQL Server > Results To Grid) to Unlimited, and then perform a query such as this:

        select Convert(xml, Details) from Events where EventID = 13920

    (Note that the data in the column is not XML at all; CONVERTing the column to XML is merely a workaround I found from Googling, which someone else has used to get around the limit SSMS has on retrieving data from a text or varchar(MAX) column.) However, after setting the option above, running the query, and clicking on the link in the result, I still get the following error:

        Unable to show XML. The following error happened:
        Unexpected end of file has occurred. Line 5, position 220160.
        One solution is to increase the number of characters retrieved from the server for XML data.
        To change this setting, on the Tools menu, click Options.

    So, any idea how to access this data? Would converting the column to varchar(MAX) fix my woes?
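
    On the closing question, a hedged sketch of the in-place conversion (assuming the table is backed up first and brief downtime is acceptable):

        -- Convert the legacy text column to varchar(MAX) in place.
        ALTER TABLE Events ALTER COLUMN Details varchar(MAX) NULL;
        -- Optional: rewrite the values so existing off-row data
        -- follows the new column's normal storage rules.
        UPDATE Events SET Details = Details;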

    Read the article

  • ASP code for uploading data

    - by vicky
    Hello everyone. I have this code for uploading an Excel file and saving the data into a database, but I'm not able to write the code for the database entry. Can someone please help?

        <%
        If (Request("FileName") <> "") Then
            Dim objUpload, lngLoop
            Response.Write(Server.MapPath("."))
            If Request.TotalBytes > 0 Then
                Set objUpload = New vbsUpload
                For lngLoop = 0 To objUpload.Files.Count - 1
                    ' If accessing this page anonymously,
                    ' the internet guest account must have
                    ' write permission to the path below.
                    objUpload.Files.Item(lngLoop).Save "D:\PrismUpdated\prism_latest\Prism\uploadxl\"
                    Response.Write "File Uploaded"
                Next

                Dim FSYSObj, folderObj, process_folder
                process_folder = Server.MapPath(".") & "\uploadxl"
                Set FSYSObj = Server.CreateObject("Scripting.FileSystemObject")
                Set folderObj = FSYSObj.GetFolder(process_folder)
                Set filCollection = folderObj.Files

                Dim SQLStr
                SQLStr = "INSERT ALL INTO TABLENAME "
                For Each file In filCollection
                    file_name = file.name
                    path = folderObj & "\" & file_name
                    Set objExcel_chk = CreateObject("Excel.Application")
                    Set ws1 = objExcel_chk.Workbooks.Open(path).Sheets(1)
                    row_cnt = 1
                    'for row_cnt = 6 to 7
                    '    if ws1.Cells(row_cnt, col_cnt).Value <> "" then
                    '        col = col_cnt
                    '    end if
                    'next
                    While (ws1.Cells(row_cnt, 1).Value <> "")
                        For col_cnt = 1 To 10
                            SQLStr = SQLStr & "VALUES('" & ws1.Cells(row_cnt, 1).Value & "')"
                        Next
                        row_cnt = row_cnt + 1
                    Wend
                    objExcel_chk.Workbooks.Close()
                    Set ws1 = Nothing
                    objExcel_chk.Quit
                    Response.Write(SQLStr)
                    'set filobj = FSYSObj.GetFile(sub_fol_path & "\" & file_name)
                    'filobj.Delete
                Next
            End If
        End If
        %>

    Please tell me how to save this Excel data to the Oracle database. Any help would be appreciated.
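
    A hedged sketch of the missing database step (the provider string, credentials, table and column names are placeholders, and it assumes the Oracle OLE DB provider is installed on the web server): open one ADO connection before the row loop and execute an INSERT per row.

        ' Hypothetical names throughout; adjust to the real table and connection details.
        Dim conn, cellValue
        Set conn = Server.CreateObject("ADODB.Connection")
        conn.Open "Provider=OraOLEDB.Oracle;Data Source=ORCL;User Id=scott;Password=tiger;"

        While (ws1.Cells(row_cnt, 1).Value <> "")
            cellValue = Replace(ws1.Cells(row_cnt, 1).Value, "'", "''") ' escape single quotes
            conn.Execute "INSERT INTO mytable (col1) VALUES ('" & cellValue & "')"
            row_cnt = row_cnt + 1
        Wend

        conn.Close
        Set conn = Nothing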

    Read the article

  • Multithreading in WCF RIA Services

    - by synergetic
    I use WCF RIA Services to update a customer database. In the domain service:

        public void UpdateCustomer(Customer customer)
        {
            this.ObjectContext.Customers.AttachAsModified(customer);
            syncCustomer(customer);
        }

    After the update, a database trigger fires and, depending on the columns updated, may insert a new record into the CustomerChange table. The syncCustomer(customer) method checks for a new record in the CustomerChange table; if one is found, it creates a text file containing the customer information and forwards the file to an external system for import. Since this synchronization may take a while, I wanted to execute it on a different thread:

        private void syncCustomer(Customer customer)
        {
            this.ObjectContext.SaveChanges();
            new Thread(() => syncCustomerInfo(customer.CustomerID)) { IsBackground = true }.Start();
        }

        private void syncCustomerInfo(int customerID)
        {
            //Thread.Sleep(2000);
            // does the real job here
            ...
        }

    The problem is that in most cases syncCustomerInfo cannot find any new CustomerChange record, even when one is definitely there. If I force the thread to sleep, it finds the new record. I also looked at Entity Framework events, but the only event the object context provides is SavingChanges, which occurs before changes are saved. Please suggest what else I should try.

    Read the article

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi, We are using an extractor application that will export data from the database to csv files. Based on some condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. So to satisfy the UNION ALL condition we are using nulls to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is whenever there is change in the table projection (i.e new column added, existing column modified, column dropped) we have to manually change the code in the application. Can you please give some suggestions how to extract the column names dynamically so that any changes in the table structure do not require change in the code? My concern is the condition that decides which table to query. The variable condition is like if the condition is A, then load from TableX if the condition is B then load from TableA and TableY. We must know from which table we need to get data. Once we know the table it is straightforward to query the column names from the data dictionary. But there is one more condition, which is that some columns need to be excluded, and these columns are different for each table. I am trying to solve the problem only for dynamically generating the list columns. But my manager told me to make solution on the conceptual level rather than just fixing. This is a very big system with providers and consumers constantly loading and consuming data. So he wanted solution that can be general. So what is the best way for storing condition, tablename, excluded columns? One way is storing in database. Are there any other ways? If yes what is the best? As I have to give at least a couple of ideas before finalizing. Thanks,

    Read the article

  • Web service performing asynchronous call

    - by kornelijepetak
    I have a web service method, FetchNumber(), that fetches a number from a database and returns it to the caller. Just before returning the number, it needs to send it to another service, so it instantiates and runs a BackgroundWorker whose job is to send the number along:

        public class FetchingNumberService : System.Web.Services.WebService
        {
            [WebMethod]
            public int FetchNumber()
            {
                int value = Database.GetNumber();
                AsyncUpdateNumber async = new AsyncUpdateNumber(value);
                return value;
            }
        }

        public class AsyncUpdateNumber
        {
            public AsyncUpdateNumber(int number)
            {
                sendingNumber = number;
                worker = new BackgroundWorker();
                worker.DoWork += asynchronousCall;
                worker.RunWorkerAsync();
            }

            private void asynchronousCall(object sender, DoWorkEventArgs e)
            {
                // Sending the number to a service (which is synchronous) here
            }

            private int sendingNumber;
            private BackgroundWorker worker;
        }

    I don't want to block the web service (FetchNumber()) while sending this number to the other service, because that can take a long time and the caller does not care about it; the caller expects FetchNumber() to return as soon as possible. FetchNumber() creates the BackgroundWorker, starts it, and then finishes while the worker runs on a background thread. I don't need any progress report or return value from the worker; it's more of a fire-and-forget concept. My question is this: since the web service object is instantiated per method call, what happens when the called method (FetchNumber() in this case) finishes while the BackgroundWorker it instantiated is still running? What happens to the background thread? When does the GC collect the service object? Does this prevent the background thread from executing correctly to the end? Are there any other side effects on the background thread? Thanks for any input.
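
    For comparison, a fire-and-forget sketch using the thread pool instead (SendNumber is a hypothetical stand-in for the synchronous send): the queued delegate captures the value, so nothing about it depends on the lifetime of the service instance that queued it.

        // Inside FetchNumber(), after reading the value:
        int value = Database.GetNumber();
        ThreadPool.QueueUserWorkItem(delegate
        {
            SendNumber(value); // hypothetical synchronous call to the other service
        });
        return value;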

    Read the article

  • Requirements for connecting to Oracle with JDBC?

    - by Lord Torgamus
    I'm a newbie to Java-related web development, and I can't seem to get a simple program with JDBC working. I'm using an off-the-shelf Oracle 10g XE and the Eclipse EE IDE. From the books and web pages I've checked so far, I've narrowed the problem down to either an incorrectly written database URL or a missing JAR file. I'm getting the following error:

        java.sql.SQLException: No suitable driver found for jdbc:oracle://127.0.0.1:8080

    with the following code:

        import java.sql.*;

        public class DatabaseTestOne {
            public static void main(String[] args) {
                String url = "jdbc:oracle://127.0.0.1:8080";
                String username = "HR";
                String password = "samplepass";
                String sql = "SELECT EMPLOYEE_ID FROM EMPLOYEES WHERE LAST_NAME='King'";

                Connection connection;
                try {
                    connection = DriverManager.getConnection(url, username, password);
                    Statement statement = connection.createStatement();
                    System.out.println(statement.execute(sql));
                    connection.close();
                } catch (SQLException e) {
                    System.err.println(e);
                }
            }
        }

    What is the proper format for a database URL, anyway? They're mentioned a lot, but I haven't been able to find a description. Thanks!

    EDIT (the answer): Based on duffymo's answer, I got ojdbc14.jar from http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html and dropped it into the Eclipse project's Referenced Libraries. Then I changed the start of the code to:

        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
        } catch (ClassNotFoundException e) {
            System.err.println(e);
        }
        // jdbc:oracle:thin:@<hostname>:<port>:<sid>
        String url = "jdbc:oracle:thin:@GalacticAC:1521:xe";
        ...

    and it worked.

    Read the article

  • SQL Server INSERT, Scope_Identity() and physical writing to disc

    - by TheBlueSky
    Hello everyone. I have a stored procedure that does, among other things, some inserts into different tables inside a loop. See the example below:

        INSERT INTO T1 VALUES ('something')
        SET @MyID = Scope_Identity()
        -- ... some stuff goes here
        INSERT INTO T2 VALUES (@MyID, 'something else')
        -- ... the rest of the procedure

    These two tables (T1 and T2) each have an IDENTITY(1, 1) column; let's call them ID1 and ID2. After running the procedure on our production database (a very busy database), with more than 6250 records in each table, I noticed one incident where ID1 does not match ID2, even though normally, for each record inserted in T1, there is a record inserted in T2 and the identity columns in both are incremented consistently. The "wrong" records looked like this:

        ID1   Col1
        ----  ---------
        4709  data-4709
        4710  data-4710

        ID2   ID1   Col1
        ----  ----  ---------
        4709  4710  data-4709
        4710  4709  data-4710

    Note the "inverted" ID1 values in the second table. Not knowing that much about what SQL Server does underneath, I have formed the following "theory"; maybe someone can correct me. I think that because the loop runs faster than the physical writes to the table, or because something else delayed the writing, the records were buffered, and when the time came to write them, they were written in no particular order. Is that even possible? If not, how can the scenario above be explained? If yes, then I have another question: what if the first insert (in the code above) is delayed? Doesn't that mean I won't get the correct IDENTITY to insert into the second table? If the answer to that is also yes, what can I do to ensure the inserts into the two tables happen in sequence with the correct IDENTITY? I appreciate any comments and information that help me understand this. Thanks in advance.
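
    One fact that may help frame the theory above: Scope_Identity() is evaluated per session and per scope, so @MyID is the value generated by this session's insert even when two sessions' identity values interleave. For capturing the generated key atomically with the insert itself, a hedged sketch using the OUTPUT clause (column names assumed from the excerpt):

        -- OUTPUT returns the identity value produced by this exact INSERT,
        -- with no window for another session to interleave.
        DECLARE @NewIds TABLE (ID1 int);
        INSERT INTO T1 (Col1)
        OUTPUT inserted.ID1 INTO @NewIds
        VALUES ('something');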

    Read the article

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that database there is a master table that lists every player who has ever played, and a batting table that tracks every player's batting statistics. I created a view joining the two: the masterplusbatting table.

        CREATE TABLE `Master` (
          `lahmanID` int(9) NOT NULL auto_increment,
          `playerID` varchar(10) NOT NULL default '',
          `nameFirst` varchar(50) default NULL,
          `nameLast` varchar(50) NOT NULL default '',
          PRIMARY KEY (`lahmanID`),
          KEY `playerID` (`playerID`)
        ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1;

        CREATE TABLE `Batting` (
          `playerID` varchar(9) NOT NULL default '',
          `yearID` smallint(4) unsigned NOT NULL default '0',
          `teamID` char(3) NOT NULL default '',
          `lgID` char(2) NOT NULL default '',
          `HR` smallint(3) unsigned default NULL,
          PRIMARY KEY (`playerID`,`yearID`,`stint`),
          KEY `playerID` (`playerID`),
          KEY `team` (`teamID`,`yearID`,`lgID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following:

        select f.yearID, f.nameFirst, f.nameLast, f.HR
        from (
            select yearID, max(HR) as HOMERS
            from masterplusbatting
            group by yearID
        ) as x
        inner join masterplusbatting as f
            on f.yearID = x.yearID and f.HR = x.HOMERS

    This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried:

        select f.yearID, truncate(f.yearID/10, 0) as decade, f.nameFirst, f.nameLast, f.HR
        from (
            select yearID, max(HR) as HOMERS
            from masterplusbatting
            group by yearID
        ) as x
        inner join masterplusbatting as f
            on f.yearID = x.yearID and f.HR = x.HOMERS
        group by decade

    You can see that I truncated the yearID to get 187, 188, 189, etc. instead of 1897, 1885 and so on. I then grouped by the decade, thinking that would give me the highest per decade, but it is not returning the correct values. For example, it gives me Adrian Beltre with 48 HRs in 2004, but everyone knows Barry Bonds hit 73 HRs in 2001. Can anyone give me some pointers?
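
    One sketch of the usual fix, inferred from the two queries above rather than verified against the data: compute the maximum per decade in the subquery itself, then join back on both the decade and the HR count, so no outer GROUP BY is needed:

        select f.yearID, f.nameFirst, f.nameLast, f.HR
        from (
            select truncate(yearID/10, 0) as decade, max(HR) as HOMERS
            from masterplusbatting
            group by truncate(yearID/10, 0)
        ) as x
        inner join masterplusbatting as f
            on truncate(f.yearID/10, 0) = x.decade and f.HR = x.HOMERS;

    The original version finds the yearly maxima first, and the outer GROUP BY decade then keeps whichever row MySQL happens to return for the non-grouped columns, which is why Beltre's 48 in 2004 can mask Bonds's 73 in 2001.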

    Read the article
