Search Results

Search found 11421 results on 457 pages for 'zend db'.

  • How to write automated tests for SQL queries?

    - by James
    The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins / sub-queries. I don't think I am experienced enough to say whether this is correct or not, so I am going along with it and trying to work within this system, as it has clear benefits. The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time-consuming and stressful.

    I would like to know how we could create an automated procedure (without specifically writing it in code if possible, as I can work out how to do that the long way) to feed in a set of 10+ different inputs, verify the output data and say whether the calculations are correct. I know I could seed the database with specific data, write a script in C# (the DB is SQL Server) and verify all the values coming out, but I would like to know what the official "standard" is, as my experience is lacking in this area and I would like to improve. I am happy to add more information if required; just add a comment. Thank you. Edit: I am using C#
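    In case it helps frame the answers: below is a minimal sketch of the pattern most answers to this kind of question converge on, i.e. seed a known data set, run the production query, and assert against hand-calculated results. It uses NUnit and ADO.NET from C# (the asker's stack); the table, columns and connection string are made-up placeholders, not details from the post.

        using NUnit.Framework;
        using System.Data.SqlClient;

        [TestFixture]
        public class ComplexQueryTests
        {
            const string Cs = @"Server=.\SQLEXPRESS;Database=QueryTests;Trusted_Connection=True;";

            [SetUp]
            public void SeedKnownData()
            {
                // Every test starts from the same small, hand-checked data set.
                Exec("DELETE FROM Orders;");
                Exec("INSERT INTO Orders (Id, CustomerId, Amount) VALUES (1, 10, 100.00), (2, 10, 50.00), (3, 20, 75.00);");
            }

            [Test]
            public void TotalPerCustomer_MatchesHandCalculatedValue()
            {
                using (var cn = new SqlConnection(Cs))
                using (var cmd = new SqlCommand(
                    "SELECT SUM(Amount) FROM Orders WHERE CustomerId = @id", cn))
                {
                    cmd.Parameters.AddWithValue("@id", 10);
                    cn.Open();
                    Assert.AreEqual(150.00m, (decimal)cmd.ExecuteScalar()); // expected value worked out by hand
                }
            }

            void Exec(string sql)
            {
                using (var cn = new SqlConnection(Cs))
                using (var cmd = new SqlCommand(sql, cn))
                {
                    cn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }

    Tools such as tSQLt or DbUnit wrap the seed/assert steps for you, but the idea is the same: the "expected" side is always worked out by hand, never by the query under test.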

  • protect form hijacking hack

    - by Karem
    Yes, hello. Today I discovered a hack against my site. When you write a message on a user's wall (in my community site) it runs an AJAX call to insert the message into the DB, and on success it slides down and shows it. Works fine with no problem. So I was rethinking a little: I am using the POST method for this, and if it were GET you could easily do ?msg=haxmsg&usr=12345679. But what could you do to get around the POST method? I made a new HTML document, made a form, and for the action I set "site.com/insertwall.php" (the file that is normally used by the AJAX call). I made some input fields with names exactly like the ones in the AJAX call (msg, uID (user id), BuID (by user id)) and a submit button. I know I have a page_protect() function which requires you to log in, and if you aren't you are redirected to index.php. So I logged in (started a session on my site.com) and then pressed the submit button. And then, whoops, I saw on my site that it had made a new message. I was like wow, was it really so easy to hijack the POST method? I thought it was a little more secure or something.

    I would like to know what I could do to prevent this hijacking, as I don't even want to find out what real hackers could do with this "hole". page_protect ensures that the session comes from the same HTTP user agent and so on, and this works fine (I tried to run the form without logging in, and it just redirects me to the start page), but it wouldn't take long to figure out to log in first and then run it. Any advice is appreciated a lot. I would like to keep my AJAX calls as secure as possible, and all of them use the POST method. What could I do in insertwall.php to check that the request really comes from my own page, or something? Thank you
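    A minimal sketch of the usual mitigation for this (a per-session CSRF token that the AJAX call must echo back), offered as one possible approach rather than a drop-in: the token generation and field names below are illustrative, and on top of this the BuID ("by user id") should be taken from the session on the server, never from the POST data.

        <?php
        // When rendering the page that contains the wall form (session already
        // started by the existing page_protect() login check):
        if (empty($_SESSION['csrf_token'])) {
            $_SESSION['csrf_token'] = md5(uniqid(mt_rand(), true));
        }
        // Echo the token into a hidden field or JS variable so the AJAX call can
        // send it along, e.g. <input type="hidden" id="csrf_token" value="...token..." />

        // At the top of insertwall.php, before touching the database:
        if ($_SERVER['REQUEST_METHOD'] !== 'POST'
                || !isset($_POST['csrf_token'])
                || $_POST['csrf_token'] !== $_SESSION['csrf_token']) {
            header('HTTP/1.1 403 Forbidden');
            exit('Invalid request');
        }
        $byUserId = $_SESSION['user_id']; // assumption: the login code stored the user's id in the session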

  • how can i set up a uniqueness constraint in mysql for columns that can be null?

    - by user299689
    I know that in MySQL, UNIQUE constraits don't treat NULL values as equal. So if I have a unique constraint on ColumnX, then two separate rows can have values of NULL for ColumnX and this wouldn't violate the constraint. How can I work around this? I can't just set the value to an arbitrary constant that I can flag, because ColumnX in my case is actually a foreign key to another table. What are my options here? Please note that this table also has an "id" column that is its primary key. Since I'm using Ruby on Rails, it's important to keep this id column as the primary key. Note 2: In reality, my unique key encompasses many columns, and some of them have to be null, because they are foreign keys, and only one of them should be non-null. What I'm actually trying to do is to "simulate" a polymorphic relationship in a way that keep referential integrity in the db, but using the technique outlined in the first part of the question asked here: http://stackoverflow.com/questions/922184/why-can-you-not-have-a-foreign-key-in-a-polymorphic-association
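    A hedged sketch of one common workaround (table and column names are made up; the generated-column form needs MySQL 5.7+, while older versions need the same "shadow" columns maintained by triggers): keep the real, nullable FK columns, and put the unique index on NOT NULL shadow columns in which NULL is folded to 0, so that rows MySQL would otherwise treat as distinct do collide.

        -- memberships has nullable FKs person_id and company_id; exactly one should be set.
        ALTER TABLE memberships
          ADD COLUMN person_key  INT AS (IFNULL(person_id, 0))  STORED,
          ADD COLUMN company_key INT AS (IFNULL(company_id, 0)) STORED,
          ADD UNIQUE KEY uniq_owner (person_key, company_key);

    With this in place the Rails `id` primary key is untouched, the real FK columns still reference their parent tables, and two rows that are NULL in the same FK column now conflict on the 0-for-0 comparison instead of slipping past the unique index.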

  • Database class is not correctly connecting to my database.

    - by blerh
    I'm just venturing into the world of OOP so forgive me if this is a n00bish question. This is what I have on index.php:

        $dbObj = new Database();
        $rsObj = new RS($dbObj);

    This is the Database class:

        class Database {
            private $dbHost;
            private $dbUser;
            private $dbPasswd;
            private $dbName;
            private $sqlCount;

            function __construct() {
                $this->dbHost = 'localhost';
                $this->dbUser = 'root';
                $this->dbPasswd = '';
                $this->dbName = 'whatever';
                $this->sqlCount = 0;
            }

            function connect() {
                $this->link = mysql_connect($this->db_host, $this->db_user, $this->db_passwd);
                if (!$this->link)
                    $this->error(mysql_error());
                $this->selection = mysql_select_db($this->db_name, $this->link);
                if (!$this->selection)
                    $this->error(mysql_error());
            }
        }

    I've shortened it to just the connect() method to simplify things. This is the RS class:

        class RS {
            private $username;
            private $password;

            function __construct($dbObj) {
                // We need to get an account from the db
                $dbObj->connect();
            }
        }

    As you can probably see, I need to access and use the database class in my RS class. But I get this error when I load the page:

        Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'ODBC'@'localhost'
        (using password: NO) in C:\xampp\htdocs\includes\database.class.php on line 22

    The thing is I have NO idea where it got the idea that it needs to use ODBC as a user... I've read up on doing this stuff and from what I can gather I am doing it correctly. Could anyone lend me a hand? Thank you.
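    A guess worth checking when reading the class above (offered as a hedged observation, not a confirmed diagnosis): the constructor sets $this->dbHost / $this->dbUser / $this->dbPasswd, but connect() reads $this->db_host / $this->db_user / $this->db_passwd. Undefined properties evaluate to NULL, and when mysql_connect() gets NULL arguments it falls back to the php.ini defaults, which on a Windows/XAMPP install is where the 'ODBC' user name tends to come from. A sketch of connect() with the names made consistent:

        function connect() {
            $this->link = mysql_connect($this->dbHost, $this->dbUser, $this->dbPasswd);
            if (!$this->link)
                $this->error(mysql_error());
            $this->selection = mysql_select_db($this->dbName, $this->link);
            if (!$this->selection)
                $this->error(mysql_error());
        }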

  • Ajax Request using jQuery in Rails

    - by Steve
    Hi... I am sending an AJAX request using jQuery. What happens is that I am getting a "405 Method Not Allowed" error. I am just posting a form, which would take the details from the form and insert them into the DB; just the usual stuff. I am using WEBrick, which comes as the default with the Rails package. Can somebody please tell me how to fix this?

    This is the code that triggers the AJAX request:

        $.post($(this).attr("action") + ".js", $(this).serialize(), null, "script");

    Response headers:

        Cache-Control    no-cache
        Allow            GET, PUT, DELETE
        Content-Type     text/html; charset=utf-8
        Content-Length   9502
        Server           WEBrick/1.3.1 (Ruby/1.9.1/2009-12-07)
        Date             Wed, 02 Jun 2010 20:41:33 GMT
        Connection       Keep-Alive

    Request headers:

        Host              localhost:3000
        User-Agent        Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept            application/json, text/javascript, */*
        Accept-Language   en-us,en;q=0.5
        Accept-Encoding   gzip,deflate
        Accept-Charset    ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive        115
        Connection        keep-alive
        Content-Type      application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With  XMLHttpRequest
        Referer           http://localhost:3000/viewspot/3
        Content-Length    141
        Pragma            no-cache
        Cache-Control     no-cache

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
        SELECT * from `employees` a
        LEFT JOIN (SELECT phone1 p1, count(*) c, FROM `employees` GROUP BY phone1) b
        ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get it to complete in about 4 minutes on a 10,000 row table, but performance drops off exponentially with larger tables. Remotely it loses the connection to the server, and locally it brings my system to its knees and seems to go on forever.

    This query is only a small step I was trying to do when a larger query couldn't complete, so maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times that the street address it is attached to occurs, then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in each group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic, and it doesn't seem like a crazy query to run. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or DB?
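    A hedged sketch of one way this kind of step is usually restructured when the derived table starts being re-evaluated per row (the temporary-table name and the VARCHAR length are assumptions; the employees/phone1 names follow the question): compute the counts once into an indexed temporary table and join against that, with an index on employees.phone1 for the join itself.

        CREATE INDEX idx_employees_phone1 ON employees (phone1);

        CREATE TEMPORARY TABLE phone_counts (
            phone1 VARCHAR(32) NOT NULL,
            c      INT NOT NULL,
            PRIMARY KEY (phone1)
        );

        INSERT INTO phone_counts (phone1, c)
        SELECT phone1, COUNT(*)
        FROM employees
        WHERE phone1 IS NOT NULL
        GROUP BY phone1;

        SELECT a.*, b.c
        FROM employees a
        LEFT JOIN phone_counts b ON a.phone1 = b.phone1;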

  • Best practice how to store HTML in a database column

    - by tbrandao
    I have an application that modifies a table dynamically (think spreadsheet). Upon saving the form (which the table is part of), I store that changed table (with the user's modifications) in a database column named html_Spreadhseet, along with the rest of the form data. Right now I'm just storing the HTML in a plain text format with basic escaping of characters. I'm aware that this could be stored as a separate file; the source table (html_workseeet) already is. But from a data-handling perspective it's easier to save the changed HTML table to and from a column, so as to avoid having to come up with a file management strategy (which folder will this live in, now I must include that folder in backups, security now needs to apply to files, how to sync DB security with the file system, etc.), so to minimize these issues I'm only storing the <table>...</table> part in the database column. My question is: should I gzip the HTML, maybe use JSON, or some other format to easily store and retrieve the HTML from the database column? What is the best practice for storing HTML content in a database? Or should I just store it as I currently am, as an escaped text column?

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008, and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams along at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.)

    The problem has the following qualities and resiliences:

    - Problem persists no matter what the target table is.
    - RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use.
    - Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high.
    - CPU usage is low (hovers around 25% shared between sqlserver.exe and DtsDebugHost.exe).
    - Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s).
    - OLE DB destination and SQL Server Destination both exhibit this problem.

    To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about dbunit:

    1) You cannot specify the exact ordering of the inserts, because dbunit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on other records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because these foreign key constraints will get fired in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an xml namespace to define your xml... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different xml files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, this process gets a little bloated too once the data grows in size and things get intertwined. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of the test data across all of your tests.

    4) Keeping track of id references in a big xml file is just impossible. If you have 130 domain classes, it just gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? dbunit has worn out its welcome and I'm really looking for something better.

  • MACRO compilation PROBLEM

    - by wildfly
    I was given a primitive task: to find out (and to put in cl) how many nums in an array are bigger than the following ones (meaning if (arr[i] > arr[i+1]) count++;), but I've got problems as it has to be a macro. I am getting errors from TASM. Can someone give me a pointer?

        SortA macro a, l
            LOCAL noes
            irp reg, <si,di,bx>
                push reg
            endm
            xor bx,bx
            xor si,si
            rept l-1        ;; also tried rept 3 : won't compile
                mov bl,a[si]
                inc si
                cmp bl,arr[si]
                jb noes
                inc di
        noes:   add di,0
            endm
            mov cx,di
            irp reg2, <bx,di,si>
                pop reg2
            endm
        endm

        dseg segment
            arr db 10,9,8,7
            len = 4
        dseg ends

        sseg segment stack
            dw 100 dup (?)
        sseg ends

        cseg segment
            assume ds:dseg, ss:sseg, cs:cseg
        start:
            mov ax, dseg
            mov ds,ax
            sortA arr,len
        cseg ends
        end start

    Errors:

        Assembling file: sorta.asm
        **Error** sorta.asm(51) REPT(4) Expecting pointer type
        **Error** sorta.asm(51) REPT(6) Symbol already different kind: NOES
        **Error** sorta.asm(51) REPT(10) Expecting pointer type
        **Error** sorta.asm(51) REPT(12) Symbol already different kind: NOES
        **Error** sorta.asm(51) REPT(16) Expecting pointer type
        **Error** sorta.asm(51) REPT(18) Symbol already different kind: NOES
        Error messages: 6

  • Surely a foolish error, but I can't see it

    - by dnagirl
    I have a form (greatly simplified):

        <form action='http://example.com/' method='post' name='event_form'>
            <input type='text' name='foo' value='3'/>
            <input type='submit' name='event_submit' value='Edit'/>
        </form>

    And I have a class "EventForm" to process the form. The main method, process(), looks like this:

        public function process($submitname=false){
            $success=false;
            if ($submitname && isset($_POST[$submitname])){
                //PROBLEM: isset($_POST[$submitname]) is always FALSE for some reason
                if($success=$this->ruler->validate()){ //check that dependancies are honoured and data types are correct
                    if($success=$this->map_fields()){ //divide input data into Event, Schedule_Element and Temporal_Expression(s)
                        $success=$this->eventholder->save();
                    }
                }
            } else { //get the record from the db based on event_id or schedule_element_id
                foreach($this->gets as $var){ //list of acceptable $_GET keys
                    if(isset($_GET[$var])){
                        if($success= $this->__set($var,$_GET[$var])) break;
                    }
                }
            }
            $this->action=(empty($this->eventholder->event_id))? ADD : EDIT;
            return $success;
        }

    When the form is submitted, this happens:

        $form->process('event_submit');

    For some reason though, isset($_POST['event_submit']) always evaluates to FALSE. Can anyone see why?

  • Formatting data for printing automatically

    - by 0bytes
    I have a requirement to retrieve data, format it to mimic an old request format we've used for years, and then send it via IP address to any of a number of printers. Gathering the data and selecting the printer is no problem; I need to format the output for the printer, and I'm just not sure what's best. The requirement is that the end users not have any interaction with the print request that's generated; only the intended recipients of the request job will know of the printed (and, in a future release, e-mailed) request. We also have a future requirement to incorporate into this solution an option to change the config in the DB so that each recipient site can choose printing or e-mail notification, so I expect that I'll need to keep the control in a console app or web service. I have the data, and I can send it to the printers or generate an HTML-formatted e-mail easily; I just need to format it for the automatic printing. I don't think SSRS will help, because I don't know of a way to designate a printer when running a report. Thanks for all help & suggestions.
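    A minimal sketch of one way to push an already-formatted text job straight to a printer by IP address (offered as an assumption-laden example: it presumes the printers accept raw jobs on the usual JetDirect port 9100, and the class and parameter names are made up):

        using System.Net.Sockets;
        using System.Text;

        public static class RawPrinter
        {
            // Sends pre-formatted text directly to a network printer's raw port.
            public static void Send(string printerIp, string jobText, int port = 9100)
            {
                using (var client = new TcpClient(printerIp, port))
                using (var stream = client.GetStream())
                {
                    byte[] bytes = Encoding.ASCII.GetBytes(jobText + "\f"); // \f = form feed to eject the page
                    stream.Write(bytes, 0, bytes.Length);
                }
            }
        }

        // usage, once the data has been formatted into the legacy request layout:
        // RawPrinter.Send("10.1.2.51", formattedRequest);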

  • non-cached RSLs in Flex?

    - by SteMa
    I have a project that is for several customers; the only difference is in the DB, everything else looks the same, except for the main page's text. That is loaded from an external swf file. I created a library, compiled it as an swc, imported it and am using it as an RSL. The problem is that once I've opened the page and afterwards update the RSL (because changes in the text are needed), it's already cached by the browser (not the Flash Player's cache, but we shouldn't discuss this please!) and the updated swf won't be loaded. If I use it as an external, the page won't even start up (the browser says it's loaded, but it's blank; not even the loading progress bar of Flex appears).

        <local:MainPage includeIn="default" currentState="{MainPageState}" id="Page" width="100%" height="100%" />

    This is the code on the main page. If I comment this out, then the whole thing loads, even with the use of the "external" link-type. If it helps, in the design view I see the component, but I get a warning for the library: Design mode could not load MainPage.swc. It may be incompatible with this SDK, or invalid. (DesignAssetLoader.CompleteTimeout)

  • C# & SQL Server Authentication

    - by Peter
    Hello, I'm currently developing a C# app with a SQL Server DB back-end. I'm approaching the point of deployment and hitting a problem. The application will be deployed within an Active Directory network. As far as SQL authentication goes, I understand that I have 2 options - Windows Authentication or Server Authentication. If I use Server Authentication, I'm concerned that the username and password for the account will be stored in plain text in the app.config file, and therefore leave the database vulnerable. Using Windows Authentication will avoid this issue, however it would mean giving every member of staff within our organisation read/write access to the database in order to run the app correctly. Whilst this is ok, it also means that they can easily connect to the database themselves via other means and directly alter the data outside of the app. I'm guessing there is something really obvious I'm missing here, but I've been googling all evening to no avail. Any advice/guidance would be much appreciated! Peter. Addition - my project is Windows Forms based, not ASP.NET - is encrypting the app.config file still the right answer? If it is, does anyone have any examples that are not ASP.NET based?
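    On the "Addition" at the end: the protected-configuration machinery usually shown in ASP.NET examples is also reachable from a Windows Forms app via System.Configuration. A minimal sketch of one possible approach (run it once per machine, e.g. at install or first start, so the connectionStrings section of the exe's config file ends up DPAPI-encrypted; the method name is made up):

        using System.Configuration;

        static void ProtectConnectionStrings()
        {
            // Opens this exe's own config file (MyApp.exe.config next to the binary).
            Configuration config =
                ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

            ConfigurationSection section = config.GetSection("connectionStrings");
            if (section != null && !section.SectionInformation.IsProtected)
            {
                // DPAPI provider: encryption is tied to this machine's key store.
                section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
                config.Save(ConfigurationSaveMode.Modified);
            }
        }

    ConfigurationManager.ConnectionStrings still decrypts the section transparently at runtime, so the rest of the code doesn't change; the trade-off is that anyone who can run code on that machine can still recover the string.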

  • passing string to a AJAX/JSON function

    - by Mikey1980
    I'm having trouble getting an AJAX/JSON function to work correctly. I had this function grabbing a value from a drop-down box, but now I want to use an anchor tag to set its value. I thought it would be easy to just use the onClick event to pass a string to the function I was using for the drop-down box, but it doesn't do anything. I'm stumped! Here is how I set it up.

    1st, I add an onClick event:

        <a href="<?php echo Settings::get('app.webroot'); ?>?view=schedule&action=questions"
           onmouseout="MM_swapImgRestore();"
           onmouseover="MM_swapImage('bre','','template/images/schedule/bre_f2.gif',1)
           onclick="assignCallType('testing')";>

    2nd, I check main.js.php:

        function assignCallType(type) {
            alert(type); //just for debugging
            new Request.JSON({
                url: "ajax.php",
                onSuccess: function(rtndata,txt){
                    if (rtndata['STATUS'] != 'OK')
                        alert('Error assigning call type to call');
                },
                onFailure: function (xhr) {
                    alert('Error assigning call type to call');
                }
            }).get({
                'action': 'assignCallType',
                'call_type': type
            });
        }

    3rd, ajax.php: the variable is back in PHP, but the values don't get added to the db, and I also didn't get the alert from main.js.php:

        if ($_GET['action'] == "assignCallType") {
            if ($USER->isInsideSales()) {
                $call_type = $_GET['call_type'];
                $_SESSION['callinfo']->setCallType($call_type);
                $_SESSION['callinfo']->save($callid);
                echo json_encode(array('STATUS'=>'OK'));
            } else {
                echo json_encode(array('STATUS'=>'DENIED'));
            }
        }

    Any idea where I am going wrong? The only difference between this and the working drop-down is how the function was called; there I used onchange="assignCallType(this.value)".

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate "scatter plots". That said, the data is entered in the form of:

        g.data("Person1",[12,32,34,55,23],[323,43,23,43,22])

    ...where the first item is the ENTITY, the second item is X-COORDs, and the third item is Y-COORDs. I currently have a recordset of items from a table with the columns: POINT, VALUE, TIMESTAMP. Due to the "complex" calculations involved I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following:

        @h={}
        e = Events.find_by_sql(my_query)
        e.each do |event|
          @h["#{event.Point}"][x] = event.timestamp
          @h["#{event.Point}"][y] = event.value
        end

    Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically the main goal is to keep data for each pointname grouped (but remember the recordset has them all). Much appreciated.

    EDIT 1

        g = Gruff::Scatter.new("600x350")
        g.title = self.name
        e = Event.find_by_sql(@sql)
        h = {}
        e.each do |event|
          h[event.Point.to_s] ||= {}
          h[event.Point.to_s].merge!({event.Timestamp.to_i,event.Value})
        end
        h.each do |p|
          logger.info p[1].values.inspect
          g.data(p[0],p[1].keys,p[1].values)
        end
        g.write(@chart_file)

  • I need to convert the result of a stored procedure in a dbml file to IQueryable to view a list in an

    - by RJ
    I have an MVC project that has a LINQ to SQL dbml class. It is a table called Clients that houses client information. I can easily get the information to display in a View using the code I followed in Nerd Dinner, but I have added a stored procedure to the dbml and its result set is of type IQueryable<SelectClientsBySearchResult>, not IQueryable<Client>. I need to convert IQueryable<SelectClientsBySearchResult> to IQueryable<Client> so I can display it in the same View. The reason for the sproc is so I can pass a search string to the sproc and return the same information as the full list, but filtered by the search. I know I could use LINQ to filter the whole list, but I don't want the whole list, so I am using the sproc. Here is the code in my ClientRepository, with a comment where I need to convert. What code goes in the commented spot?

        public IQueryable<Client> SelectClientsBySearch(String search)
        {
            IQueryable<SelectClientsBySearchResult> spClientList =
                (from p in db.SelectClientsBySearch(search) select p).AsQueryable();

            //what is the code to convert IQueryable<SelectClientsBySearchResult> to IQueryable<Client>

            return clientList;
        }
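    A minimal sketch of what usually goes in that commented spot (hedged: it assumes SelectClientsBySearchResult exposes the same columns the Client entity needs, and the property names below are illustrative): project each sproc row into a Client. Note that the sproc has already executed at this point, so AsQueryable() only wraps an in-memory sequence rather than composing further SQL.

        public IQueryable<Client> SelectClientsBySearch(String search)
        {
            IQueryable<Client> clientList = db.SelectClientsBySearch(search)
                .Select(r => new Client
                {
                    ClientID  = r.ClientID,   // copy over each column the View binds to
                    FirstName = r.FirstName,
                    LastName  = r.LastName
                })
                .AsQueryable();

            return clientList;
        }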

  • SQL Server job (stored proc) trace

    - by Jit
    Hi Friends, I need your suggestions on tracing this issue. We run data load jobs in the early morning, loading data from an Excel file into a SQL Server 2005 DB. When the job runs on the production server, it often takes 2 to 3 hours to complete. We have drilled down to one job step which is taking 99% of the total time to finish. Running that job step (stored procs) in the staging environment (with the same production database restored) takes 9 to 10 minutes, but the same step takes hours on the production server when it runs in the early morning as part of the job. The production server always gets stuck on that particular job step. I would like to run a trace on just that job step (around 10 stored procs run for each user in a while loop within the job step) and collect the info to figure out the issue. What ways are available in SQL Server 2005 to achieve this? I want to run the trace only for these SPs, and not for a certain period of time on the production server, as a full trace gives a lot of information and it becomes very difficult for me (not being a DBA) to analyze that much trace output and figure out the issue. So I want to collect info about these specific SPs only. Let me know what you suggest. Appreciate your time and help. Thanks.

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.Net MVC. I've gotten the results from various property database tables and sorted them into a master generic property response. The search form is passed via Model Binding and works great. Now, I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, I no longer have the original form of course to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but it seems inelegant. I'd like to serialize the results to my MS SQL database and return a unique key for that results set - this also helps with a "Send this query to a friend!" link. Killing two birds with one stone. Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key and then reverse the process with said key? I'm using Linq to SQL currently on a MS SQL Express install, though in production it'll be on MS SQL 2008.
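    A hedged sketch of the "save it and hand back a key" flow. One caveat first: the IQueryable itself (the pending query) is not really serializable in a useful way, so what usually gets persisted is either the materialised result page or, more simply, the bound search-criteria object, which can be replayed to rebuild the query. The SavedSearch table/entity, the PropertySearchCriteria model and the BuildQuery helper below are assumptions for illustration, not parts of the original project.

        using System;
        using System.Linq;
        using System.Web.Script.Serialization;   // JavaScriptSerializer (System.Web.Extensions)

        public int SaveSearch(PropertySearchCriteria criteria)
        {
            var saved = new SavedSearch
            {
                CriteriaJson = new JavaScriptSerializer().Serialize(criteria),
                CreatedOn    = DateTime.Now
            };
            db.SavedSearches.InsertOnSubmit(saved);  // LINQ to SQL table of SavedSearch rows
            db.SubmitChanges();
            return saved.Id;                         // identity value: use it in /Results/{id}/2 or "send to a friend" links
        }

        public IQueryable<Property> RunSavedSearch(int id)
        {
            var saved    = db.SavedSearches.Single(s => s.Id == id);
            var criteria = new JavaScriptSerializer()
                               .Deserialize<PropertySearchCriteria>(saved.CriteriaJson);
            return BuildQuery(criteria);             // the same method the normal search action uses
        }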

  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer to peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique ID's. I want to make sure I'll never run out of ID's, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions... 1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use firebird for now. But I may switch later. Or possibly support multiple db's. So I'm looking for generalizations, if that's possible. 2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway? My varchar keys won't have any meaning, except for the unique system ID part. But I may want to obscure that somehow. Also, I plan to efficiently use all the bits of each character. I don't, for example, plan to code the integer 123 as the character string "123". So I don't think varchars will require more space than integers.

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and/or requiring dynamic SQL, and/or procedural logic. PL/SQL is custom made for handling these situations, but as a language it does not compare to C#, and even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible. Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects.

    My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed. I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.

  • I have data about deadlocks, but I can't understand why they occur

    - by Alex
    I am receiving a lot of deadlocks in my big web application. http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server - there I wanted to re-run deadlocked transactions, but I was told to get rid of the deadlocks instead; that's much better than trying to catch them. So I spent the whole day with SQL Profiler, setting the tracing keys etc., and this is what I got.

    There's a Users table. I have a very heavily used page with the following query (it's not the only query, but it's the one that causes trouble):

        UPDATE Users SET views = views + 1
        WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID)

    And then there's the following query in ALL pages:

        User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username);

    That's where I get the User from cookies. Very often a deadlock occurs and this second LINQ to SQL query is chosen as the victim, so it isn't run, and users of my site see an error screen.

    I read a lot about deadlocks... and I don't understand why this is causing one. Obviously both of these queries run very often - at least once a second, maybe even more often (300-400 users online). So they can be run at the same time very easily, but why does that cause a deadlock? Please help. Thank you
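    Two things that are commonly tried for exactly this UPDATE-versus-SELECT pattern, offered as hedged sketches rather than a diagnosis (the index names and database name are made up; whether they help depends on the indexes already in place): give both statements narrow covering indexes so they lock fewer rows and pages, and/or turn on read committed snapshot so the login SELECT reads row versions instead of taking shared locks that can deadlock with the views = views + 1 update.

        -- Let the subquery find the author without touching more of Articles than needed,
        -- and let the login lookup seek on Users instead of scanning it:
        CREATE INDEX IX_Articles_ArticleID_AuthorID ON Articles (ArticleID) INCLUDE (AuthorID);
        CREATE INDEX IX_Users_Name_Password ON Users (Name, Password);

        -- Row versioning for readers (SQL Server 2005+); needs a moment of exclusive
        -- access to the database to switch on:
        ALTER DATABASE MySiteDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;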

  • [C#]How to introduce retry logic into LINQ to SQL to deal with timeouts?

    - by codemonkie
    I need to find a way to add a retry mechanism to my DB calls in case of timeouts; LINQ to SQL is used to call some sprocs in my code...

        using (MyDataContext dc = new MyDataContext())
        {
            int result = -1; //denote failure
            int count = 0;
            while ((result < 0) && (count < MAX_RETRIES))
            {
                result = dc.myStoredProc1(...);
                count++;
            }

            result = -1;
            count = 0;
            while ((result < 0) && (count < MAX_RETRIES))
            {
                result = dc.myStoredProc2(...);
                count++;
            }
            ...
            ...
        }

    Not sure if the code above is right or poses any complications. It'd be nice to throw an exception after MAX_RETRIES has been reached, but I dunno how and where to throw them appropriately :-) Any help appreciated.
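    A hedged sketch of one way to factor that loop out and surface a final failure (the timeout check via SqlException.Number == -2 and the helper/exception choices are assumptions to adjust; also note that a timeout surfaces as an exception rather than a negative return value, which is why the sketch catches instead of testing result < 0):

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        static T RetryOnTimeout<T>(Func<T> call, int maxRetries)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return call();
                }
                catch (SqlException ex)
                {
                    if (ex.Number != -2)      // -2 = "Timeout expired" on the client; anything else isn't retried
                        throw;
                    if (attempt >= maxRetries)
                        throw new TimeoutException(
                            "Stored procedure still timing out after " + maxRetries + " attempts.", ex);
                    Thread.Sleep(500 * attempt);  // simple linear back-off before the next try
                }
            }
        }

        // usage inside the existing using block:
        // int result = RetryOnTimeout(() => dc.myStoredProc1(arg1, arg2), MAX_RETRIES);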

  • Identical files from different servers. Why might IE 8 display them differently?

    - by jasongetsdown
    I'm working on a site that will go on my company's intranet. I developed it locally on my computer, checking it in different browsers and on colleague's computers, and when it was done I handed it off to IT. They put identical copies on a staging server, and on the production server. This is a site built only with html, javascript, and css. No server side scripting. It also uses a DWF viewer plugin from Autodesk. It is a single standalone page (not part of a CMS) that allows users to load drawings into the viewer and then click to see info from a database of space info saved in a series of js arrays (the space DB software spits out a js file with all the info listed in array literals, creating a crap ton of global variables - ugh, but I digress). When I followed their links (using IE 8) the version on the staging server looked as expected, but the layout is hosed on the version from the production server. Specifically, it seems like a div that is supposed to flow to the right of a div that is float: left is displaying below the floated div at full width, as though it was clear: left (which it is not). It also has the wrong height. I downloaded the files from each and they are identical to my local version. Frustrated, I cleared my browser's cache, restarted my computer, checked it on a colleague's computer who also has IE 8. All the same issue. Staging server good. Production server bad. Finally I uninstalled IE 8 and looked at it in IE 6. Both versions looked fine. So, to recap. Two different servers. No server side scripting. Identical files. One browser agrees they are identical, the other does not. What could cause this?

  • GridView to excel after create send mail c#

    - by Diego Bran
    I want to send a .xlsx file. First I create it (it has HTML code in it), then I use an SMTP server to send it. It does attach the file, but when I try to open it, it says that the file is corrupted, etc. Any help? Here is my code:

        try
        {
            System.IO.StringWriter sw = new System.IO.StringWriter();
            System.Web.UI.HtmlTextWriter htw = new System.Web.UI.HtmlTextWriter(sw);

            // Render grid view control.
            gvStock.RenderControl(htw);

            // Write the rendered content to a file.
            string renderedGridView = sw.ToString();
            File.WriteAllText(@"C:\test\ExportedFile.xls", renderedGridView);
            // File.WriteAllText(@"C:\test\ExportedFile.xls", p1);
        }
        catch (Exception e)
        {
            Response.Write(e.Message);
        }

        try
        {
            MailMessage mail = new MailMessage();
            SmtpClient SmtpServer = new SmtpClient("server");
            mail.From = new MailAddress("[email protected]");
            mail.To.Add("[email protected]");
            mail.Subject = "Test Mail - 1";
            mail.Body = "mail with attachment";

            Attachment data = new Attachment("C:/test/ExportedFile.xls");
            mail.Attachments.Add(data);

            SmtpServer.Port = 25;
            SmtpServer.Credentials = new System.Net.NetworkCredential("user", "pass");
            // SmtpServer.EnableSsl = true;
            SmtpServer.UseDefaultCredentials = false;
            SmtpServer.Send(mail);
        }
        catch (Exception e)
        {
            Response.Write(e.Message);
        }
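    For context, a hedged reading of the symptom plus one possible fix: RenderControl() produces HTML, so the attachment is really an HTML page wearing an .xls/.xlsx extension; older Excel versions open it with a format-mismatch warning, and anything expecting a genuine .xlsx (a zip package) reports it as corrupted. One option is to build a real workbook instead, for example with the ClosedXML library (an assumption - any OpenXML-capable library would do; the sheet name, method name and path are placeholders):

        using ClosedXML.Excel;   // assumes the ClosedXML package is referenced

        private string ExportGridToXlsx(string path)
        {
            using (var workbook = new XLWorkbook())
            {
                var sheet = workbook.Worksheets.Add("Stock");

                // Header row from the GridView's header cells.
                for (int c = 0; c < gvStock.HeaderRow.Cells.Count; c++)
                    sheet.Cell(1, c + 1).Value = gvStock.HeaderRow.Cells[c].Text;

                // Data rows.
                for (int r = 0; r < gvStock.Rows.Count; r++)
                    for (int c = 0; c < gvStock.Rows[r].Cells.Count; c++)
                        sheet.Cell(r + 2, c + 1).Value = gvStock.Rows[r].Cells[c].Text;

                workbook.SaveAs(path);   // a genuine .xlsx that the existing mail code can attach as-is
            }
            return path;
        }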
