Search Results

Search found 1620 results on 65 pages for 'proc'.

Page 57/65 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • How to get XML element/attribute name in SQL Server 2005

    - by OG Dude
    Hi, I have a simple procedure in SQL Server 2005 which takes some XML as input. The element attributes correspond to field names in tables. I'd like to be able to determine <elementName> and <attribNameX> dynamically, so as to avoid having to hardcode them into the procedure. How can I do this? The XML looks like this: <ROOT> <elementName attribName1 = "xxx" attribName2 = "yyy"/> <elementName attribName1 = "aaa" attribName2 = "bbb"/> ... </ROOT> The stored procedure looks like this: CREATE PROC dbo.myProc ( @XMLInput varchar(1000) ) AS BEGIN SET NOCOUNT ON DECLARE @XMLDocHandle int EXEC sp_xml_preparedocument @XMLDocHandle OUTPUT, @XMLInput SELECT someTable.someCol FROM dbo.someTable JOIN OPENXML (@XMLDocHandle, '/ROOT/elementName',1) WITH (attribName1 int, attribName2 int) AS XMLData ON someTable.attribName1 = XMLData.attribName1 AND someTable.attribName2 = XMLData.attribName2 EXEC sp_xml_removedocument @XMLDocHandle END GO The question has been asked here before, but maybe there is a cleaner solution. Additionally, I'd like to pass the table name as a parameter as well - I read some stuff arguing that this is bad style - so what would be a good solution for having a dynamic table name? Thanks a lot in advance, /David
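    One way to discover the element and attribute names at run time, rather than hardcoding them, is to call OPENXML without a WITH clause so it returns its edge table, which carries a name for every node. A minimal T-SQL sketch reusing the handle setup from the procedure above (localname and nodetype are the documented edge-table columns; this only lists the names, it does not replace the join):

      DECLARE @XMLDocHandle int
      EXEC sp_xml_preparedocument @XMLDocHandle OUTPUT, @XMLInput

      -- With no WITH clause, OPENXML returns the edge table: one row per node,
      -- including attribute nodes, with the node's name in the localname column.
      SELECT DISTINCT localname, nodetype   -- nodetype 1 = element, 2 = attribute
      FROM OPENXML(@XMLDocHandle, '/ROOT/elementName')

      EXEC sp_xml_removedocument @XMLDocHandle

    For the dynamic table name, the usual compromise is to validate the supplied name against sys.tables, wrap it in QUOTENAME(), and build the statement for sp_executesql rather than concatenating raw input.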

    Read the article

  • Is it possible to turn off vdso on the glibc side?

    - by heroxbd
    I am aware that passing vdso=0 to the kernel can turn this feature off, and that the dynamic linker in glibc can automatically detect and use the vdso feature from the kernel. Here I met with this problem. There is a RHEL 5.6 box (kernel 2.6.18-238.el5) in my institution where I only have normal user access, probably suffering from RHEL bug 673616. As I compile a toolchain of linux-headers-3.9/gcc-4.7.2/glibc-2.17/binutils-2.23 on top of it, the gcc bootstrap fails because cc1 from stage2 cannot be run: Program received signal SIGSEGV, Segmentation fault. 0x00002aaaaaaca6eb in ?? () (gdb) info sharedlibrary From To Syms Read Shared Object Library 0x00002aaaaaaabba0 0x00002aaaaaac3249 Yes (*) /home/benda/gnto/lib64/ld-linux-x86-64.so.2 0x00002aaaaacd29b0 0x00002aaaaace2480 Yes (*) /home/benda/gnto/usr/lib/libmpc.so.3 0x00002aaaaaef2cd0 0x00002aaaaaf36c08 Yes (*) /home/benda/gnto/usr/lib/libmpfr.so.4 0x00002aaaab14f280 0x00002aaaab19b658 Yes (*) /home/benda/gnto/usr/lib/libgmp.so.10 0x00002aaaab3b3060 0x00002aaaab3b3b50 Yes (*) /home/benda/gnto/lib/libdl.so.2 0x00002aaaab5b87b0 0x00002aaaab5c4bb0 Yes (*) /home/benda/gnto/usr/lib/libz.so.1 0x00002aaaab7d0e70 0x00002aaaab80f62c Yes (*) /home/benda/gnto/lib/libm.so.6 0x00002aaaaba70d40 0x00002aaaabb81aec Yes (*) /home/benda/gnto/lib/libc.so.6 (*): Shared library is missing debugging information. And a simple program #include <sys/time.h> #include <stdio.h> int main () { struct timeval tim; gettimeofday(&tim, NULL); return 0; } gets a segmentation fault in the same way if compiled against glibc-2.17 and the xgcc from stage1. Both cc1 and the test program can be run on another RHEL 5.5 box (kernel 2.6.18-194.26.1.el5) with gcc-4.7.2/glibc-2.17/binutils-2.23 as a normal user. I cannot simply upgrade the box to a newer RHEL version, nor can I turn VDSO off via sysctl or proc. The question is: is there a way to compile glibc so that it turns off VDSO unconditionally?

    Read the article

  • Can I use POSIX signals in my Perl program to create event-driven programming?

    - by Shiftbit
    Are there any POSIX signals that I could utilize in my Perl program to create event-driven programming? Currently I have a multi-process program that is able to cross-communicate, but my parent process is only able to listen to one child at a time. foreach (@proc) { sysread(${$_}{'read'}, my $line, 100); #problem here chomp($line); print "Parent hears: $line\n"; } The problem is that the parent sits in a continual wait state until it receives a signal from the first child before it can continue on. I am relying on 'pipe' for my intercommunication. My current solution is very similar to: http://stackoverflow.com/questions/2558098/how-can-i-use-pipe-to-facilitate-interprocess-communication-in-perl If possible I would like to rely on a $SIG{...} event or any non-CPAN solution. Update: As Jonathan Leffler mentioned, kill can be used to send a signal: kill USR1 => $$; # send myself a SIGUSR1 My solution will be to have the child process send a USR1 signal to the parent. This event tells the parent to listen to the particular child. child: kill USR1 => $parentPID if($customEvent); syswrite($parentPipe, $msg, $buffer); #select $parentPipe; print $parentPipe $msg; parent: $SIG{USR1} = { #get child pid? sysread($array[$pid]{'childPipe'}, $msg, $buffer); }; But how do I get the source/child pid that signaled the parent? Have the child identify itself in its message. What happens if two children signal USR1 at the same time?

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas. I have 1 EC2 DB server (MySQL + NFS file sharing + memcached) and 3 EC2 web servers (lighttpd) which mount the NFS folders from the DB server. Everything has been going smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything goes back to normal. Normal files like .html files are unaffected. All servers have the same problem at exactly the same time. I have spent one whole day analyzing the reason. Finally, I found out that when the problem appears, the number of file descriptors held by lighttpd suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds. The number of fds is around 250 normally; however, when the problem appears, it rises to about 1500 and then goes back to normal. It sounds funny, right? Do you have any idea what's going on? ======================== The CPU graph of one of the web servers.

    Read the article

  • Has anyone used the client_side_validations gem with a Chosen.js dropdown?

    - by Abid
    I am using chosen.js (http://harvesthq.github.com/chosen/). I was wondering if anyone has been able to use Chosen select boxes and client_side_validations together. The issue is that when we use Chosen it hides the original select element and renders its own dropdown instead, so when we focus out the validation isn't called, and when the validation message is shown it is shown with the original select element, so the positioning of the error isn't correct either. What could be a good way to handle this? Maybe we can change some code inside ActionView::Base.field_error_proc, which currently looks something like ActionView::Base.field_error_proc = Proc.new do |html_tag, instance| unless html_tag =~ /^<label/ %{<div class="field_with_errors">#{html_tag}<label for="#{instance.send(:tag_id)}" class="message">#{instance.error_message.first}</label></div>}.html_safe else %{<div class="field_with_errors">#{html_tag}</div>}.html_safe end end Any ideas? Edit 1: I have the following solution that is working for me now. I applied a class "chzn-dropdown" to all my selects that were being displayed by Chosen, and used the following callback provided by the client_side_validations gem: clientSideValidations.callbacks.element.fail = function(element, message, callback) { if (element.data('valid') !== false) { if(element.hasClass('dropdown')){ chzn_element = $('#'+element.attr('id')+'_chzn'); console.log(chzn_element); chzn_element.append(""+message+""); } else{ callback(); } } } Thanks

    Read the article

  • How to have a where clause on an insert or an update in Linq to Sql?

    - by Kelsey
    I am trying to convert the following stored proc to a LINQ to SQL call (this is a simplified version of the SQL): INSERT INTO [MyTable] ([Name], [Value]) SELECT @name, @value WHERE NOT EXISTS(SELECT [Value] FROM [MyTable] WHERE [Value] = @value) The DB does not have a constraint on the field that is being checked, so in this specific case the check needs to be made manually. There are also many items constantly being inserted, so I need to make sure that when this specific insert happens there is no dupe of the value field. My first hunch is to do the following: using (TransactionScope scope = new TransactionScope()) { if (Context.MyTables.SingleOrDefault(t => t.Value == in.Value) == null) { MyLinqModels.MyTable t = new MyLinqModels.MyTable() { Name = in.Name, Value = in.Value }; // Do some stuff in the transaction scope.Complete(); } } This is the first time I have really run into this scenario, so I want to make sure I am going about it the right way. Does this seem correct, or can anyone suggest a better way of going about it without having two separate calls? Edit: I am running into a similar issue with an update: UPDATE [AnotherTable] SET [Code] = @code WHERE [ID] = @id AND [Code] IS NULL How would I do the same check with LINQ to SQL? I assume I need to do a get and then set all the values and submit, but what if someone updates [Code] to something other than null between the time I do the get and when the update executes? Same problem as the insert...
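    For comparison, a hedged T-SQL sketch of how the same guarantees are often kept on the database side if the check stays in the stored proc (table, column and parameter names are taken from the question; the UPDLOCK/HOLDLOCK hints are the usual way to keep the check and the insert atomic against concurrent callers, not something the original proc uses):

      -- Insert only when the value is not there yet; the lock hints hold the
      -- checked range until commit, so two concurrent callers cannot both pass.
      BEGIN TRANSACTION
      IF NOT EXISTS (SELECT 1 FROM [MyTable] WITH (UPDLOCK, HOLDLOCK)
                     WHERE [Value] = @value)
          INSERT INTO [MyTable] ([Name], [Value]) VALUES (@name, @value)
      COMMIT TRANSACTION

      -- Update only while [Code] is still NULL; if another writer got there
      -- first, @@ROWCOUNT comes back as 0 and the caller can react to that.
      UPDATE [AnotherTable]
      SET    [Code] = @code
      WHERE  [ID] = @id AND [Code] IS NULL

    On the LINQ to SQL side the usual equivalent is to attempt the write and treat zero affected rows (or a unique-key violation, once a constraint exists) as "someone else got there first", since a check-then-insert written purely in C# is not atomic on its own.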

    Read the article

  • How to pass a Dictionary variable to another procedure

    - by salvationishere
    I am developing a C# VS2008/SQL Server website application. I've never used the Dictionary class before, but I am trying to replace my Hashtable with a Dictionary variable. Here is a portion of my aspx.cs code: ... Dictionary<string, string> openWith = new Dictionary<string, string>(); for (int col = 0; col < headers.Length; col++) { @temp = (col + 1); @tempS = @temp.ToString(); @tempT = "@col" + @temp.ToString(); ... openWith.Add(@tempT, headers[col]); } ... for (int r = 0; r < myInputFile.Rows.Count; r++) { resultLabel.Text = ADONET_methods.AppendDataCT(myInputFile, openWith); } But this is giving me a compiler error on this last line: Argument '2': cannot convert from 'System.Collections.Generic.Dictionary' to 'string' How do I pass the entire openWith variable to AppendDataCT? AppendDataCT is the method that calls my SQL stored proc. I want to pass in the whole row where each row has a unique set of values that I want to add to my database table. For example, if each row requires values for cells A, B, and C, then I want to pass these 3 values to AppendDataCT, where all of these values are strings. How do I do this with Dictionary?

    Read the article

  • iPod library song path access

    - by Narendra Kumar
    I studied a lot but did not find any good answer. My problem is that I am calculating the beats per minute of a song. I used the BASS API for that; now the problem is that I am able to get the BPM of a file which I have in my resource folder, but I have to get the BPM of all songs in the iPod library. I am getting the path of the song from the MPMediaItemPropertyAssetURL property of MPMediaItem, but when passing it to the API, the API says the stream can't be loaded in BASS_StreamCreateFile(). From my point of view I am not getting the right path of the song. How can we access a valid path? Did anyone access an iPod library song with an external API? Please help me. Thanks. The code is this: NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL]; NSString *respath = [NSString stringWithFormat:@"%@",[assetURL absoluteString]]; BASS_SetConfig(BASS_CONFIG_IOS_MIXAUDIO, 0); // Disable mixing. To be called before BASS_Init. if (HIWORD(BASS_GetVersion()) != BASSVERSION) { NSLog(@"An incorrect version of BASS was loaded"); } // Initialize default device. if (!BASS_Init(-1, 44100, 0, NULL, NULL)) { //textView.text = [NSString stringWithFormat:@"%@ CAN'T Load Stream",textView.text]; } DWORD chan1; if(!(chan1=BASS_StreamCreateFile(FALSE, [respath UTF8String], 0, 0, BASS_SAMPLE_LOOP))) { NSLog(@"Can't load stream!"); textView.text = [NSString stringWithFormat:@"%@ not loading...",textView.text]; } mainStream=BASS_StreamCreateFile(FALSE, [respath cStringUsingEncoding:NSUTF8StringEncoding], 0, 0, BASS_SAMPLE_FLOAT|BASS_STREAM_PRESCAN|BASS_STREAM_DECODE); float playBackDuration=BASS_ChannelBytes2Seconds(mainStream, BASS_ChannelGetLength(mainStream, BASS_POS_BYTE)); NSLog(@"Play back duration is %f",playBackDuration); HSTREAM bpmStream=BASS_StreamCreateFile(FALSE, [respath UTF8String], 0, 0, BASS_STREAM_PRESCAN|BASS_SAMPLE_FLOAT|BASS_STREAM_DECODE); //BASS_ChannelPlay(bpmStream,FALSE); BpmValue= BASS_FX_BPM_DecodeGet(bpmStream,0.0, playBackDuration, MAKELONG(45,256), BASS_FX_BPM_MULT2| BASS_FX_BPM_MULT2 | BASS_FX_FREESOURCE, (BPMPROCESSPROC*)proc); textView.text = [NSString stringWithFormat:@"%@ %f",textView.text,BpmValue];

    Read the article

  • Optimizing TSQL code

    - by adopilot
    My job is to maintain one application which heavily uses SQL Server (MSSQL 2005). Until now the middle-tier server has stored T-SQL code in XML and sent dynamic T-SQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of my queries to stored procs. The question is the following: most of my queries have the same WHERE conditions against one table. Sample: Select ..... from .... where .... and (a.vrsta_id = @vrsta_id or @vrsta_id = 0) and (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0) and (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0) and ( (a.id in (select art_id from osobina_veze where podosobina_id in (select ado from dbo.fn_ado_param_int(@podosobina)) group by art_id having count(art_id)= @podosobina_count )) or ('0' = @podosobina) ) They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I make a table-valued function that I will use in all queries, or use #temp tables and simply inner join my query to them each time the proc executes? Or use a #temp table filled by a table-valued function? Or leave all queries with this large WHERE clause and hope that the indexes do their job? Or use a WITH (CTE) statement?
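    As an illustration of the table-valued function option, here is a hedged sketch of an inline TVF that wraps the shared WHERE block (the function name and the base table name dbo.Articles are made up; the predicates and dbo.fn_ado_param_int come from the sample above). Because an inline TVF is expanded into the calling query like a view, the shared filter stays composable instead of being materialized into a #temp table first:

      CREATE FUNCTION dbo.fn_FilteredArticles
      (
          @vrsta_id         int,
          @podvrsta_id      int,
          @podgrupa2_id     int,
          @podosobina       varchar(max),
          @podosobina_count int
      )
      RETURNS TABLE
      AS
      RETURN
      (
          SELECT a.*
          FROM dbo.Articles AS a   -- hypothetical name for the table aliased "a" in the sample
          WHERE (a.vrsta_id    = @vrsta_id     OR @vrsta_id    = 0)
            AND (a.podvrsta_id = @podvrsta_id  OR @podvrsta_id = 0)
            AND (a.podgrupa_2  = @podgrupa2_id OR @podgrupa2_id = 0)
            AND ( '0' = @podosobina
                  OR a.id IN (SELECT art_id
                              FROM osobina_veze
                              WHERE podosobina_id IN (SELECT ado FROM dbo.fn_ado_param_int(@podosobina))
                              GROUP BY art_id
                              HAVING COUNT(art_id) = @podosobina_count) )
      )
      GO

    Each proc would then select from dbo.fn_FilteredArticles(@vrsta_id, @podvrsta_id, @podgrupa2_id, @podosobina, @podosobina_count) and join whatever else it needs; whether the catch-all (@param = 0 OR column = @param) predicates produce good plans is a separate tuning question either way.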

    Read the article

  • DataReader already open when using LINQ

    - by Jamie Dixon
    I've got a static class containing a static field which makes reference to a wrapper object around a DataContext. The DataContext is basically generated by Visual Studio when we created a dbml file and contains methods for each of the stored procedures we have in the DB. Our class basically has a bunch of static methods that fire off each of these stored proc methods and then return an array based on a LINQ query. Example: public static TwoFieldBarData[] GetAgesReportData(string pct) { return DataContext .BreakdownOfUsersByAge(Constants.USER_MEDICAL_PROFILE_KEY, pct) .Select(x => new TwoFieldBarData(x.DisplayName, x.LeftValue, x.RightValue, x.TotalCount)) .ToArray(); } Every now and then, we get the following error: There is already an open DataReader associated with this Command which must be closed first. This is happening intermittently and I'm curious as to what is going on. My guess is that when there's some lag between one method executing and the next one firing, it's locking up the DataContext and throwing the error. Could this be a case for wrapping each of the DataContext LINQ calls in a lock(){} to obtain exclusivity to that type and ensure other requests are queued?

    Read the article

  • Dataset bind to Gridview within WCF REST retrieval method and Linq to Sql

    - by user643794
    I used a WCF REST template to build a WCF service library with PUT and GET calls. The PUT method works fine, sending my blob to a database. On the GET, I want to be able to access the web service directly, display the results from a stored procedure as a dataset, and bind this to a gridview. The stored procedure is a simple select statement, returning three of the four columns from the table. I have the following: [WebGet(UriTemplate = "/?name={name}", ResponseFormat = WebMessageFormat.Xml)] public List<Object> GetCollection(string name) { try { db.OpenDbConnection(); // Call to SQL stored procedure return db.GetCustFromName(name); } catch (Exception e) { Log.Error("Stored Proc execution failed. ", e); } finally { db.CloseDbConnection(); } return null; } I also added a LINQ to SQL class to include my database table and stored procedure access, and I created the Default.aspx file in addition to the other required files. protected void Page_Load(object sender, EventArgs e) { ServiceDataContext objectContext = new ServiceDataContext(); var source = objectContext.GetCustFromName("Tiger"); Menu1.DataSource = source; Menu1.DataBind(); } But this gives me: The entity type '' does not belong to any registered model. Where should the data binding be done? What should be the return type for GetCollection()? I am stuck with this. Please provide help on how to do this.

    Read the article

  • Execute Stored Procedure from Classic ASP

    - by Jaco Pretorius
    For some fantastic reason I find myself debugging a problem in a Classic ASP page (at least 10 years of my life lost in the last 2 days). I'm trying to execute a stored procedure which contains some OUT parameters. The problem is that one of the OUT parameters is not being populated when the stored procedure returns. I can execute the stored proc from SQL management studio (this is 2008) and all the values are being set and returned exactly as expected. declare @inVar1 varchar(255) declare @inVar2 varchar(255) declare @outVar1 varchar(255) declare @outVar2 varchar(255) SET @inVar2 = 'someValue' exec theStoredProc @inVar1 , @inVar2 , @outVar1 OUT, @outVar2 OUT print '@outVar1=' + @outVar1 print '@outVar2=' + @outVar2 Works great. Fantastic. Perfect. The exact values that I'm expecting are being returned and printed out. Right, since I'm trying to debug a Classic ASP page I copied the code into a VBScript file to try and narrow down the problem. Here is what I came up with: Set Conn = CreateObject("ADODB.Connection") Conn.Open "xxx" Set objCommandSec = CreateObject("ADODB.Command") objCommandSec.ActiveConnection = Conn objCommandSec.CommandType = 4 objCommandSec.CommandText = "theStoredProc " objCommandSec.Parameters.Refresh objCommandSec.Parameters(2) = "someValue" objCommandSec.Execute MsgBox(objCommandSec.Parameters(3)) Doesn't work. Not even a little bit. (Another ten years of my life down the drain) The third parameter is simply NULL - which is what I'm experiencing in the Classic ASP page as well. Could someone shed some light on this? Am I completely daft for thinking that the classic ASP code would be the same as the VBScript code? I think it's using the same scripting engine and syntax so I should be ok, but I'm not 100% sure. The result I'm seeing from my VBScript is the same as I'm seeing in ASP.

    Read the article

  • Ruby Design Problem for SQL Bulk Inserter

    - by crunchyt
    This is a Ruby design problem. How can I make a reusable flat-file parser that can perform different data-scrubbing operations per call, return the emitted results from each scrubbing operation to the caller, and perform bulk SQL insertions? Now, before anyone gets narky/concerned, I have already written this code in a very unDRY fashion, which is why I am asking any Ruby rockstars out there for some assistance. Basically, every time I want to perform this logic, I create two nested loops with custom processing in between, buffer each processed line to an array, and output to the DB as a bulk insert when the buffer size limit is reached. Although I have written lots of helpers, the main pattern is being copy-pasted every time. Not very DRY! Here is a Ruby/pseudocode example of what I am repeating: lines_from_file.each do |line| line.match(/some regex/).each do |sub_str| # Process substring into useful format # EG1: Simple gsub() call # EG2: Custom function call to do complex scrubbing # and matching, emitting results to array # EG3: Loop to match opening/closing/nested brackets # or other delimiters and emit results to array end # Add processed lines to a buffer as SQL insert statement @buffer << PREPARED INSERT STATEMENT # Flush buffer when "buffer size limit reached" or "end of file" if sql_buffer_full || last_line_reached @dbc.insert(SQL INSERTS FROM BUFFER) @buffer = nil end end I am familiar with Proc/Lambda functions; however, because I want to pass two separate procs to the one function, I am not sure how to proceed. I have some idea about how to solve this, but I would really like to see what real Rubyists suggest. Over to you. Thanks in advance :D

    Read the article

  • Rack, FastCGI, Lighttpd configuration

    - by zacsek
    Hi! I want to run a simple application using Rack, FastCGI and Lighttpd, but I cannot get it working. I get the following error: /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `initialize': Address already in use - bind(2) (Errno::EADDRINUSE) from /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `new' from /usr/lib/ruby/1.8/rack/handler/fastcgi.rb:23:in `run' from /www/test.rb:7 Here is the application: #!/usr/bin/ruby app = Proc.new do |env| [200, {'Content-Type' => 'text/plain'}, "Hello World!"] end require 'rack' Rack::Handler::FastCGI.run app, :Port => 4000 ... and the lighttpd.conf: server.modules += ( "mod_access", "mod_accesslog", "mod_fastcgi" ) server.port = 80 server.document-root = "/www" mimetype.assign = ( ".html" => "text/html", ".txt" => "text/plain", ".jpg" => "image/jpeg", ".png" => "image/png" ) index-file.names = ( "test.rb" ) fastcgi.debug = 1 fastcgi.server = ( ".rb" => (( "host" => "127.0.0.1", "port" => 4000, "bin-path" => "/www/test.rb", "check-local" => "disable", "max-procs" => 1 )) ) Can someone help me? What am I doing wrong?

    Read the article

  • Entity Framework vNext wish list

    - by Fred Yang
    I have been intensively studying and using EF4 in my project. I do feel the improvement it has over version 1, but I have found some things I cannot get around easily. Here is a list of things I want to be better in EF vNext: the model designer should allow multiple views of the same model, so that I don't need to cram all my entities into a single view. It should respect the user's manual edits of the edmx. Currently, some database view objects simply cannot be imported into the model because the designer "smartly" decides that the view does not have a primary key, so I have to manually edit the edmx to correct the designer's behavior. But on the next "update from database" task, the designer will revert my customization. For now, I simply fall back to editing the edmx file entirely by hand, or I have to use a compare tool to keep the new update, then roll back and put the new update into my old edmx file manually. The designer should be improved to allow both default behavior and the user's manual control; I want a way to stop the designer from refreshing the changes to an imported object. It should also support user-defined table functions. LINQ is about composability, and stored procs do not support composability; I wish I could use user-defined table functions, which do support it. What are your wishes for EF vNext?

    Read the article

  • ASP.NET MVC: retrieve images from the DB and display them on a page

    - by Trey Carroll
    //Inherits="System.Web.Mvc.ViewPage<FilmFestWeb.Models.ListVideosViewModel>" <h2>ListVideos</h2> <% foreach(BusinessObjects.Video vid in Model.VideoList){%> <div class="videoBox"> <%= Html.Encode(vid.Name) %> <img src="<% vid.ThumbnailImage; %>" /> </div> <%} %> //ListVideosViewModel public class ListVideosViewModel { public IList<Video> VideoList { get; set; } } //Video public class Video { public long VideoId { get; set; } public long TeamId { get; set; } public string Name { get; set; } public string Tags { get; set; } public string TeamMembers { get; set; } public string TranscriptFileName { get; set; } public string VideoFileName { get; set; } public int TotalNumRatings { get; set; } public int CumulativeTotalScore { get; set; } public string VideoUri { get; set; } public Image ThumbnailImage { get; set; } } I am getting the "red x" that I usually associate with image file not found. I have verified that my database table shows <binary data> after the stored proc that uploads the image executes. Any insight or advice would be greatly appreciated.

    Read the article

  • Colorizing a SAS Map

    - by user601828
    I'm trying to generate a map in SAS where I would like to make gradual color changes which correspond to my results, so the higher the counts, the more intense the color. I would also like to add state labels to the map. Here is my code; so far it produces a white map with varying degrees of blue blocks. I'd like the states colored in intense colors, like red, bright pink, and brilliant blues and greens. Can anyone please help me modify the code to add state labels and colorize the map, and below the map add a table summarizing the statistics, like counts and percentages? Thanks in advance. goptions gunit=pct cback=white htitle=4 htext=3 colors=(PAGY LIY STY DEGY dark_yellow very_dark_yellow ) ; title "My Map Results"; proc gmap map=maps.us data=My_data all; id state; block person_per_event/levels=6; choro person_per_event/levels=6; run; quit; I looked at this page before: http://robslink.com/SAS/democd61/election_2012.htm - for example, I would want to make a map like that one with my data. I tried modifying the code given at that link, but wasn't very successful. I would like to use that map along with the state labels, keep the colors, and represent my data with blocks in the corresponding locations, with city, state, and high-level counts. The rest of the summary statistics I would like to summarize in a colorful table next to the map, like a dashboard of sorts. Appreciate any help in advance. Thanks, -rachel

    Read the article

  • TransactionScope and Transactions

    - by Mike
    In my C# code I am using TransactionScope because I was told not to rely on my SQL programmers always using transactions, and we are responsible, yada yada. Having said that, it looks like the TransactionScope object rolls back before the SqlTransaction? Is that possible, and if so, what is the correct methodology for wrapping a SQL transaction inside a TransactionScope? Here is the SQL test: CREATE PROC ThrowError AS BEGIN TRANSACTION --SqlTransaction SELECT 1/0 IF @@ERROR<> 0 BEGIN ROLLBACK TRANSACTION --SqlTransaction RETURN -1 END ELSE BEGIN COMMIT TRANSACTION --SqlTransaction RETURN 0 END go DECLARE @RESULT INT EXEC @RESULT = ThrowError SELECT @RESULT If I run this I get just the divide-by-zero error and a return of -1. Calling it from the C# code I get an extra error message: Divide by zero error encountered. Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 0. If I give the SQL transaction a name, then: Cannot roll back SqlTransaction. No transaction or savepoint of that name was found. Transaction count after EXECUTE indicates that a COMMIT or ROLLBACK TRANSACTION statement is missing. Previous count = 1, current count = 2. Sometimes it seems the count goes up until the app completely exits. The C# is just using (TransactionScope scope = new TransactionScope()) { ... Execute Sql scope.Complete() }
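    A hedged T-SQL sketch of the usual way to make a proc like ThrowError behave the same whether or not an outer transaction (here, the one TransactionScope opens) already exists: commit or roll back only what the proc itself started, and fall back to a savepoint when it is nested, so @@TRANCOUNT is the same on exit as on entry and the "transaction count after EXECUTE" error goes away. This is a generic pattern adapted to the proc above, not the only possible fix:

      CREATE PROC ThrowError
      AS
      BEGIN
          DECLARE @startedTranCount int
          SET @startedTranCount = @@TRANCOUNT

          IF @startedTranCount = 0
          BEGIN
              BEGIN TRANSACTION                -- the proc owns the transaction
          END
          ELSE
          BEGIN
              SAVE TRANSACTION ThrowErrorSave  -- nested call: mark a savepoint instead
          END

          SELECT 1/0

          IF @@ERROR <> 0
          BEGIN
              IF @startedTranCount = 0
                  ROLLBACK TRANSACTION                 -- safe: nothing outer to destroy
              ELSE
                  ROLLBACK TRANSACTION ThrowErrorSave  -- undo only this proc's work
              RETURN -1
          END

          IF @startedTranCount = 0
              COMMIT TRANSACTION
          RETURN 0
      END

    On the C# side the scope still just calls scope.Complete(); if the scope is not completed, the whole ambient transaction rolls back regardless of what the proc did inside it.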

    Read the article

  • Disable validation in an object in Ruby on Rails

    - by J. Pablo Fernández
    I have an object for which whether validation happens or not should depend on a boolean; in other words, validation is optional. I haven't found a clean way to do it. What I'm currently doing is this (disclaimer: you cannot unsee it, so leave this page if you are too sensitive): def valid? if perform_validation super else super # Call valid? so that callbacks get called and things like encrypting passwords and generating salt in before_validation actually happen errors.clear # but then clear the errors true # and claim ourselves to be valid. This is super hacky! end end Any better ways? Before you point to the :if argument of many validations, this is for a user model which is using authlogic, so it has a lot of validation rules. You can stop reading here if you believe me. If you don't: authlogic already sets some :ifs like :if => :email_changed?, which I have to turn into :if => Proc.new {|user| user.email_changed? and user.perform_validation}, and in some other cases, since I'm also using authlogic-oid (OpenID), I just don't have control over the :if; authlogic-oid sets it in a way I cannot change (in time) without further monkey patching. So I have to override seemingly unrelated functions, catch exceptions if a method doesn't exist, etc. The previous hacky solution is the best of my two attempts.

    Read the article

  • Debug not working in Monodroid with a Galaxy Nexus

    - by MaxM
    I'm starting to work with Monodroid testing on a Galaxy Nexus from MonoDevelop for Mac. Running the default Android project without debugging works. But if I try to debug it either says this in the Application Output pane: Error trying to detect already running process Or it outputs the following to logcat: I/ActivityManager( 448): Start proc monotest.monotest for activity monotest.monotest/monotest.Activity1: pid=3075 uid=10068 gids={3003} D/dalvikvm( 3063): GC_CONCURRENT freed 98K, 89% free 478K/4096K, paused 0ms+1ms I/dalvikvm( 3075): Turning on JNI app bug workarounds for target SDK version 8... V/PhoneStatusBar( 524): setLightsOn(true) I/ActivityThread( 3075): Pub monotest.monotest.__mono_init__: mono.MonoRuntimeProvider D/dalvikvm( 3075): Trying to load lib /data/data/monotest.monotest/lib/libmonodroid.so 0x41820850 D/dalvikvm( 3075): Added shared lib /data/data/monotest.monotest/lib/libmonodroid.so 0x41820850 D/OpenGLRenderer( 683): Flushing caches (mode 1) E/mono ( 3075): WARNING: The runtime version supported by this application is unavailable. E/mono ( 3075): Using default runtime: v2.0.50727 D/OpenGLRenderer( 683): Flushing caches (mode 0) I/monodroid-gc( 3075): environment supports jni NewWeakGlobalRef I/mono ( 3075): Stacktrace: I/mono ( 3075): D/AndroidRuntime( 3093): D/AndroidRuntime( 3093): >>>>>> AndroidRuntime START com.android.internal.os.RuntimeInit <<<<<< D/AndroidRuntime( 3093): CheckJNI is OFF D/AndroidRuntime( 3093): Calling main entry com.android.commands.am.Am D/dalvikvm( 3021): GC_CONCURRENT freed 359K, 3% free 15630K/16071K, paused 2ms+4ms D/Zygote ( 119): Process 3075 terminated by signal (11) I/ActivityManager( 448): Process monotest.monotest (pid 3075) has died. I tried using another device (a Galaxy Tab) and it worked fine. I also tried the suggestion from here and it didn't help.

    Read the article

  • How to debug problems in Linux kernel module `init()`?

    - by Kimvais
    I am using remote (k)gdb to debug a problem in a module that causes a panic when loaded, i.e. when init() is called. The stack trace just shows that do_one_initcall(mod->init) causes the crash. In order to get the symbol file loaded in gdb, I need the address of the module's text section, and to get that I need to get the module loaded. Because the insmod in busybox (1.16.1) doesn't support -m, I'm stuck with grepping for the module name in /proc/modules and adding the offset from nm to figure out the address. So I'm facing a sort of chicken-and-egg problem here: to be able to debug the module loading, I need to get the module loaded - but in order to get the module loaded, I need to debug the problem... So I am currently thinking about two options - is there a way to get the address information either by a printk() in the module init code, or by a printk() somewhere in the kernel code, all of this prior to calling mod->init() - so I could place a breakpoint there, load the symbol file, hit c and see it crash and burn...

    Read the article

  • Regarding some Update Stored procedure

    - by Serenity
    I have two tables as follows: Table1: ------------------------------------- PageID|Content|TitleID(FK)|LanguageID ------------------------------------- 1 |abc |101 |1 2 |xyz |102 |1 -------------------------------------- Table2: ------------------------- TitleID|Title |LanguageID ------------------------- 101 |Title1|1 102 |Title2|1 ------------------------ I don't want to add duplicates to my Table1 (the content table); there can be no two pages with the same title. What check do I need to add to my insert/update stored procedure, and how do I make sure duplicates are never added? I have tried as follows: CREATE PROC InsertUpdatePageContent ( @PageID int, @Content nvarchar(2000), @TitleID int ) AS BEGIN IF(@PageID=-1) BEGIN IF(NOT EXISTS(SELECT TitleID FROM Table1 WHERE LANGUAGEID=@LANGUAGEID)) BEGIN INSERT INTO Table1(Content,TitleID) VALUES(@Content,@TitleID) END END ELSE BEGIN IF(NOT EXISTS(SELECT TitleID FROM Table1 WHERE LANGUAGEID=@LANGUAGEID)) BEGIN UPDATE Table1 SET Content=@Content,TitleID=@TitleID WHERE PAGEID=@PAGEID END END END Now what is happening is that it inserts new records fine and won't allow duplicates to be added, but when I update it gives me a problem. On my aspx page I have a drop-down list control bound to a data source that returns Table2 (the title table), and a text box in which the user types the page content to be stored. Let's say I have a row in Table1 as shown above with PageID=1. When I update this row without changing the title in the drop-down, only changing the content in the text box, it does not update the record, and when the stored procedure's UPDATE query does not execute it displays a label that says "Page with this title exists already." So whenever I am updating an existing record that label is displayed on screen. How do I change that IF condition in my update stored procedure? EDIT: @gbn :: Will that IF condition work in the case of an update? I mean, let's say I am updating the page with TitleID=1 and I changed its content; when I update, it's going to execute that IF condition and it still won't update, because TitleID=1 already exists! It will only update if TitleID=1 is not already in Table1, won't it? I guess I am getting confused. Please answer. Thanks.
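    Since the stated rule is "no two pages with the same title", the check probably belongs on TitleID rather than the LANGUAGEID used above, and the update branch has to exclude the row being edited so that re-saving a page with its own unchanged title still succeeds. A hedged sketch of just the two IF conditions, keeping the table and parameter names from the question:

      IF (@PageID = -1)
      BEGIN
          IF NOT EXISTS (SELECT 1 FROM Table1 WHERE TitleID = @TitleID)
              INSERT INTO Table1 (Content, TitleID) VALUES (@Content, @TitleID)
      END
      ELSE
      BEGIN
          -- exclude the row being edited from the duplicate check
          IF NOT EXISTS (SELECT 1 FROM Table1
                         WHERE TitleID = @TitleID AND PageID <> @PageID)
              UPDATE Table1
              SET    Content = @Content, TitleID = @TitleID
              WHERE  PageID = @PageID
      END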

    Read the article

  • CREATE VIEW called multiple times not creating all views

    - by theninepoundhammer
    Noticing strange behavior in SQL 2005, both Express and Enterprise Edition: In my code I need to loop through a series of values (about five in a row), and for each value, I need to insert the value into a table and dynamically create a new view using that value as part of the where clause and the name of the view. The code runs pretty quickly, but what I'm noticing is that all the values are inserted into the table correctly but only the LAST view is being created. Every time. For example, if the values I'm using are X1, X2, X3, X4, and X5, I'll run the process, open up Mgmt Studio, and see five rows in the table with the correct five values, but only one view named MyView_x5 that has the correct WHERE clause. At first, I had this loop in an SSIS package as part of a larger data flow. When I started noticing this behavior, I created a stored proc that would create the CREATE VIEW statement dynamically after the insert and called EXECUTE to create the view. Same result. Finally, I created some C# code using the Enterprise Library DAAB, and did the insert and CREATE VIEW statements from my DLL. Same result every time. Most recently, I turned on Profiler while running against the Enterprise Edition and was able to verify that the Batch Started and Batch Completed events were being fired off for each instance of the view. However, like I said, only the last view is actually being created. Does anyone have any idea why this might be happening? Or any suggestions about what else to check or profile? I've profiled for error messages, exceptions, etc. but don't see any in my trace file. My express edition is 9.00.1399.06. Not sure about the Enterprise edition but think it is SP2.
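    One detail that commonly bites loops like this: CREATE VIEW must be the first statement in its batch, so each view has to be created through its own EXEC of a string that contains nothing but the CREATE VIEW. A hedged T-SQL sketch of one iteration (dbo.MyValues, dbo.MyData and the MyView_ naming are placeholders, not the poster's objects):

      DECLARE @val sysname, @sql nvarchar(2000)
      SET @val = N'X1'                                   -- current loop value

      INSERT INTO dbo.MyValues (Val) VALUES (@val)       -- the insert can share a batch

      SET @sql = N'CREATE VIEW dbo.' + QUOTENAME(N'MyView_' + @val) + N' AS
                   SELECT * FROM dbo.MyData WHERE SomeCol = ''' + REPLACE(@val, '''', '''''') + N''''

      EXEC (@sql)   -- the CREATE VIEW runs alone in its own batch

    If the trace contains "must be the first statement in a query batch" errors, or only one CREATE VIEW statement instead of five, the batching or the string-building loop is the likely culprit rather than the inserts.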

    Read the article

  • Creating a ComboBox with one or more separator items?

    - by Steve
    I'm using Delphi 7 and I'd like to have a ComboBox with separator items (just like in popup menus). I've seen this beautifully implemented in Mozilla Sunbird (I know, it's not Delphi...) in the following way: (1) the separator item is a simple gray line drawn in the center of the item; (2) if you hover over the separator with the mouse, the selection doesn't appear; (3) if the user clicks the separator, it's not selected either AND the combobox doesn't close up. No. 1 could be implemented using DrawItem. I could live without No. 2 because I have no idea how to do that. For No. 3 I'm asking for your help. I've figured out that straight after closing up, a CBN_CLOSEUP message is sent to the combobox. I thought about hooking the window proc and, if CBN_CLOSEUP is sent to a certain combobox, countering it. But I'm unsure if this is the best solution; maybe there are other, more elegant ways? Whatever the solution is, I'd like to have a standard ComboBox which supports WinXP/Vista/7 theming properly. Thanks! Edit: For a working component please see this thread: Can you help translating this very small C++ component to Delphi?

    Read the article

  • FDs not closed in FUSE filesystem

    - by cor
    Hi, I have a problem while implementing a FUSE filesystem in Python. For now I just have a proxy filesystem, exactly like a mount --bind would be. But any file created, opened, or read on my filesystem is not released (the corresponding FD is not closed). Here is an example: yume% ./ProxyFs.py `pwd`/test yume% cd test yume% ls mdr yume% echo test > test yume% ls mdr test yume% ps auxwww | grep python cor 22822 0.0 0.0 43596 4696 ? Ssl 12:57 0:00 python ./ProxyFs.py /home/cor/esl/proxyfs/test cor 22873 0.0 0.0 6352 812 pts/1 S+ 12:58 0:00 grep python yume% ls -l /proc/22822/fd total 0 lrwx------ 1 cor cor 64 2010-05-27 12:58 0 -> /dev/null lrwx------ 1 cor cor 64 2010-05-27 12:58 1 -> /dev/null lrwx------ 1 cor cor 64 2010-05-27 12:58 2 -> /dev/null lrwx------ 1 cor cor 64 2010-05-27 12:58 3 -> /dev/fuse l-wx------ 1 cor cor 64 2010-05-27 12:58 4 -> /home/cor/test/test yume% Does anyone have a solution to actually close the FDs of the files I use in my FS? I'm pretty sure it's a mistake in the implementation of the open, read, and write hooks, but I'm stuck... Let me know if you need more details! Thanks a lot, Cor

    Read the article
