Search Results

Search found 11954 results on 479 pages for 'gets'.


  • LINQ to SQL performance on highly loaded web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all that good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to be slower, but it surprisingly turned out to be pretty fast, primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about that. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets the data at the beginning, works with it and updates it. The updates are mostly ++ and -- (increasing and decreasing values). I used to do it like this: UPDATE table SET value=value+1 WHERE ID=@Id. It worked with no problems, obviously. But with LINQ to SQL the data is read at the beginning, moved into the class, changed and then saved: Stats.RegisteredUsers++; Db.SubmitChanges(); Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (which happens on my site all the time) then LINQ says "oops, this value is already 100,001, I'll throw an exception". You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?
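
    A minimal sketch of the usual workaround, assuming a LINQ to SQL DataContext named Db and a Stats table with an ID column (those names come from the question, everything else is hypothetical): send the increment to the database as a single atomic UPDATE through ExecuteCommand instead of the read-modify-write cycle on the entity.

        // Hedged sketch: DataContext.ExecuteCommand runs the parameterised UPDATE
        // as-is, so the database performs value = value + 1 atomically and the
        // optimistic-concurrency check on the tracked entity never comes into play.
        public void IncrementRegisteredUsers(int id)
        {
            Db.ExecuteCommand(
                "UPDATE Stats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
                id);
        }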

    Read the article

  • Why is F# member not found when used in subclass

    - by James Black
    I have a base type that I want to inherit from, for all my DAO objects, but this member gets the error further down about not being defined: type BaseDAO() = member v.ExecNonQuery2(conn)(sqlStr) = let comm = new MySqlCommand(sqlStr, conn, CommandTimeout = 10) comm.ExecuteNonQuery |> ignore comm.Dispose |> ignore I inherit in this type: type CreateDatabase() = inherit BaseDAO() member private self.createDatabase(conn) = self.ExecNonQuery2 conn "DROP DATABASE IF EXISTS restaurant" This is what I see when my script runs in the interactive shell: --> Referenced 'C:\Program Files\MySQL\MySQL Connector Net 6.2.3\Assemblies\MySql.Data.dll' [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\BaseDAO.fs] namespace FSI_0106.RestaurantServiceDAO type BaseDAO = class new : unit -> BaseDAO member ExecNonQuery2 : conn:MySql.Data.MySqlClient.MySqlConnection -> sqlStr:string -> unit member execNonQuery : sqlStr:string -> unit member execQuery : sqlStr:string * selectFunc:(MySql.Data.MySqlClient.MySqlDataReader -> 'a list) -> 'a list member f : x:obj -> string member Conn : MySql.Data.MySqlClient.MySqlConnection end [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs] C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs(56,14): error FS0039: The field, constructor or member 'ExecNonQuery2' is not defined I am curious what I am doing wrong. I have tried not inheriting, and just instantiating the BaseDAO type in the function, but I get the same error. I started on this path because I had a property that had the same error, so it seems there may be a problem with how I am defining my BaseDAO type, but it compiles with no error, which further confuses me about this problem.
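
    For reference, a minimal, dependency-free sketch (the MySqlCommand calls replaced with printfn) of a curried member on a base type being called from an inheriting type; if a shape like this compiles in a fresh session, the FS0039 error above may be coming from the load order of the two files or a stale interactive session rather than from the member definition itself.

        // Hedged sketch: BaseDAO exposes a curried member; CreateDatabase inherits
        // it and calls it with two arguments, mirroring the call in the question.
        type BaseDAO() =
            member x.ExecNonQuery2 (conn: string) (sqlStr: string) =
                printfn "would execute '%s' on connection '%s'" sqlStr conn

        type CreateDatabase() =
            inherit BaseDAO()
            member private self.CreateDb(conn: string) =
                self.ExecNonQuery2 conn "DROP DATABASE IF EXISTS restaurant"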

    Read the article

  • quartz.net - Can I not add a callback delegate method to JobExecutionContext?

    - by Greg
    Hi, BACKGROUND - I have a synchronisation function within my MainForm class. It gets called manually when the user pushes the SYNC button. I also want to call this synchronisation function when the scheduler fires, so effectively I want the SchedulerJob:IJob.Execute() method to be able to call it. QUESTION - How do I access the MainForm.Synchronization() method from within the SchedulerJob:IJob.Execute() method? I tried creating a delegate for this method in the MainForm class and adding it via jobDetail.JobDataMap, but I'm not sure that JobDataMap has a method to pull out a delegate type. private void Schedule(MainForm.SyncDelegate _syncNow) { var jobDetail = new JobDetail("MainJob", null, typeof(SchedulerJob)); jobDetail.JobDataMap["CallbackMethod"] = _syncNow; // Trigger Setup var trigger = new CronTrigger("MainTrigger"); string expression = GetCronExpression(); trigger.CronExpressionString = expression; trigger.StartTimeUtc = DateTime.Now.ToUniversalTime(); // Schedule Job & Trigger _scheduler.ScheduleJob(jobDetail, trigger); } public class SchedulerJob : IJob { public SchedulerJob() { } public void Execute(JobExecutionContext context) { JobDataMap dataMap = context.JobDetail.JobDataMap; MainForm.SyncDelegate CallbackFunction = dataMap.getDelegate["CallbackMethod"]; // THIS METHOD DOESN'T EXIST - getDelegate() CallbackFunction(); } } thanks
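
    A minimal sketch of one way to get the delegate back out, assuming the Quartz.NET 1.x API used in the question: JobDataMap stores plain objects, so the indexer plus a cast stands in for the non-existent getDelegate().

        // Hedged sketch: retrieve the stored delegate with the JobDataMap indexer
        // (the same indexer the Schedule() method used to put it in) and cast it.
        public class SchedulerJob : IJob
        {
            public void Execute(JobExecutionContext context)
            {
                JobDataMap dataMap = context.JobDetail.JobDataMap;
                var callback = (MainForm.SyncDelegate)dataMap["CallbackMethod"];
                callback();
            }
        }

    Note that the job runs on a scheduler thread, so if the synchronisation method touches WinForms controls it may need to marshal back to the UI thread (for example via Control.Invoke).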

    Read the article

  • Creating futures using Apple's GCD

    - by jer
    I'm working on a library that implements the actor model on top of Grand Central Dispatch (specifically the C-level API libdispatch). A brief overview of the system: communication happens between actors using messages; communication is multicast only (one actor to many actors); senders and receivers are decoupled from one another using a blackboard that messages are pushed to; and messages are dispatched on the default queue asynchronously using dispatch_group_async() once a message is pushed onto the blackboard. I'm trying to implement futures in the library right now, so I've created a new type which holds some information: a group of its own, and the value being 'returned'. However, I have a problem: dispatch_block_t is of type void (^)(void), so it doesn't return anything. That means my idea of setting up, in my future_new() function, another group that executes a block returning a result, which I could store in the 'value' member of my future_t structure, isn't going to work. The rest of the futures implementation is very clear, except that it all depends on being able to get the value back into the future from the actor acting on the message. It would greatly reduce the library's usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system; it just isn't practical. I'm wondering if anyone can think of a way around this?
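
    A minimal sketch of one way around the void-returning block (hypothetical names, not the library's actual future_t): the enqueued block stores its result into the future by side effect and then signals a semaphore, so no value ever has to pass through dispatch_block_t's signature.

        /* Hedged sketch, assuming clang with blocks support (-fblocks). */
        #include <dispatch/dispatch.h>
        #include <stdlib.h>

        typedef struct future {
            dispatch_semaphore_t done;   /* signalled once the value is in place */
            void *value;
        } future_t;

        future_t *future_new(void *(^work)(void)) {
            future_t *f = malloc(sizeof *f);
            f->done = dispatch_semaphore_create(0);
            f->value = NULL;
            dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                f->value = work();                   /* result captured by side effect */
                dispatch_semaphore_signal(f->done);
            });
            return f;
        }

        void *future_get(future_t *f) {
            dispatch_semaphore_wait(f->done, DISPATCH_TIME_FOREVER);
            return f->value;
        }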

    Read the article

  • Can you catch exceeded allocated memory error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that gets rows from the database and assigns the returned associative array to a variable, but I never know how big the database result is or how much memory it will take once the request is made. This means that my software can fail simply because the memory limit is exceeded, and I want to avoid that somehow. One way is obviously to make smaller database requests, but what if that is not possible, or what if I do not know the size of the data that is returned? Is it possible to 'catch' situations where memory use is exceeded in PHP? Something like this: $requestOk=memory_test(function(){ return doSomething(); }); if($requestOk){ // Memory seems fine // $requestOk now has the value from memory_test() function } else { // Function would have exceeded memory } I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because it is a fatal error. Any help would be appreciated!
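
    A minimal sketch of a guard in the spirit of the hypothetical memory_test() above, assuming the rows can be fetched one at a time and that memory_limit uses the common "M" suffix: compare memory_get_usage() against the configured limit while the result is being built, and bail out before the fatal error can fire.

        <?php
        // Hedged sketch: returns the rows, or false if finishing the fetch would
        // have pushed memory usage past the given fraction of memory_limit.
        function fetch_with_memory_guard(mysqli_result $result, $headroom = 0.8) {
            $limit = ini_get('memory_limit');        // e.g. "128M" (crude parse below)
            $bytes = (int)$limit * 1024 * 1024;
            $rows = array();
            while ($row = $result->fetch_assoc()) {
                if (memory_get_usage(true) > $headroom * $bytes) {
                    return false;                    // would have exceeded memory
                }
                $rows[] = $row;
            }
            return $rows;
        }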

    Read the article

  • Javascript callback with AJAX + jQuery

    - by Fred
    Hey! I have this jQuery code (function () { function load_page (pagename) { $.ajax({ url: "/backend/index.php/frontend/pull_page/", type: "POST", data: {page: pagename}, success: function (json) { var parsed = $.parseJSON(json); console.log(parsed); return parsed; }, error: function (error) { $('#content').html('Sorry, there was an error: <br>' + error); return false; } }); } ... var json = load_page(page); console.log(json); if (json == false) { $('body').fadeIn(); } else { document.title = json.pagename + ' | The Other Half | freddum.com'; $("#content").html(json.content); $('#header-navigation-ul a:Contains('+page+')').addClass('nav-selected'); $('body').fadeIn(); } })(); and, you guessed it, it doesn't work. The AJAX fires fine and the server returns valid JSON, but console.log(json); prints undefined and the js crashes when it gets to json.pagename. The first console.log(parsed) prints good data, so it's just a problem with the return (I think). I knew I was clutching at straws with this approach. To be honest, I don't know how to write callback functions for this situation. Any help is greatly appreciated!
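
    A minimal sketch of the callback version, reusing the URL and handlers from the question: $.ajax is asynchronous, so load_page cannot return the parsed JSON; instead, the code that needs the data is passed in as a function and runs when the response arrives.

        // Hedged sketch: success/error logic moves into callbacks supplied by the caller.
        function load_page(pagename, onLoaded, onError) {
            $.ajax({
                url: "/backend/index.php/frontend/pull_page/",
                type: "POST",
                data: { page: pagename },
                success: function (json) {
                    onLoaded($.parseJSON(json));
                },
                error: function (xhr) {
                    onError(xhr);
                }
            });
        }

        load_page(page, function (json) {
            document.title = json.pagename + ' | The Other Half | freddum.com';
            $("#content").html(json.content);
            $('#header-navigation-ul a:Contains(' + page + ')').addClass('nav-selected');
            $('body').fadeIn();
        }, function () {
            $('body').fadeIn();          // same fallback as the json == false branch
        });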

    Read the article

  • javascript - rollover effect on overlapping transparent images

    - by user427969
    I want to add a rollover effect to overlapping transparent images. For example, an image divided into 5 parts, where each part should get its own rollover effect (a different image). When I tried this with div or img tags, each image is rendered as a rectangle, so the rollover effect is not correct: when I roll over the green part between the yellow parts, the yellow image gets highlighted because its z-index is higher. Following is the code that I tried: <body> <br /> <img src="part1.png" onclick="console.log('test1');"/> <img src="part2.png" onclick="console.log('test2');" style="position:absolute; left:30px; top: 19px;"/> <img src="part3.png" onclick="console.log('test3');" style="position:absolute; left:70px; top: 15px;"/> <img src="part4.png" onclick="console.log('test4');" style="position:absolute; left:95px; top: 16px;"/> <img src="part5.png" onclick="console.log('test5');" style="position:absolute; left:123px; top: 24px;"/> </body> (The example image and the five part images were shown inline in the original question.) I don't want to use jQuery, if possible.
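
    A minimal sketch of one plain-JavaScript approach (assumes the part images are same-origin, so the canvas is not tainted, plus a hypothetical "hover" class for the rollover styling): test the alpha of the pixel under the cursor, so a mouseover on a transparent region falls through to the image underneath instead of being captured by the topmost rectangle.

        // Hedged sketch: true only if the image is opaque at (x, y), where x/y are
        // relative to the image's top-left corner. A real version would cache the
        // canvas per image instead of redrawing it on every call.
        function isOpaqueAt(img, x, y) {
            var canvas = document.createElement('canvas');
            canvas.width = img.naturalWidth;
            canvas.height = img.naturalHeight;
            var ctx = canvas.getContext('2d');
            ctx.drawImage(img, 0, 0);
            return ctx.getImageData(x, y, 1, 1).data[3] > 0;   // data[3] is the alpha byte
        }

        document.addEventListener('mousemove', function (e) {
            var parts = document.querySelectorAll('img[src^="part"]');
            for (var i = 0; i < parts.length; i++) {
                var r = parts[i].getBoundingClientRect();
                var x = Math.floor(e.clientX - r.left);
                var y = Math.floor(e.clientY - r.top);
                var hit = x >= 0 && y >= 0 && x < parts[i].naturalWidth &&
                          y < parts[i].naturalHeight && isOpaqueAt(parts[i], x, y);
                parts[i].className = hit ? 'hover' : '';
            }
        });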

    Read the article

  • Java equivalent of the VB Request.InputStream

    - by Android Addict
    I have a web service that I am re-writing from VB to a Java servlet. In the web service, I want to extract the body entity set on the client-side as such: StringEntity stringEntity = new StringEntity(xml, HTTP.UTF_8); stringEntity.setContentType("application/xml"); httppost.setEntity(stringEntity); In the VB web service, I get this data by using: Dim objReader As System.IO.StreamReader objReader = New System.IO.StreamReader(Request.InputStream) Dim strXML As String = objReader.ReadToEnd and this works great. But I am looking for the equivalent in Java. I have tried this: ServletInputStream dataStream = req.getInputStream(); byte[] data = new byte[dataStream.toString().length()]; dataStream.read(data); but all it gets me is an unintelligible string: data = [B@68514fec Please advise. Edit: Per the answers, I have tried: ServletInputStream dataStream = req.getInputStream(); ByteArrayOutputStream buffer = new ByteArrayOutputStream(); int r; byte[] data = new byte[1024*1024]; while ((r = dataStream.read(data, 0, data.length)) != -1) { buffer.write(data, 0, r); } buffer.flush(); byte[] data2 = buffer.toByteArray(); System.out.println("DATA = "+Arrays.toString(data2)); which yields: DATA = [] and when I try: System.out.println("DATA = "+data2.toString()); I get: DATA = [B@15282c7f So what am I doing wrong? As stated earlier, the same call to my VB service gives me the XML that I pass in.
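
    A minimal sketch of reading the posted XML as text; the "[B@68514fec" output is just the byte array's object reference, not its contents. This assumes the request body has not already been read elsewhere, which is one possible reason for the empty DATA = [] result.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import javax.servlet.http.HttpServletRequest;

        // Hedged sketch: the Java counterpart of StreamReader.ReadToEnd over
        // Request.InputStream - read the servlet input stream into a String.
        String readBody(HttpServletRequest req) throws IOException {
            StringBuilder xml = new StringBuilder();
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(req.getInputStream(), "UTF-8"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    xml.append(line).append('\n');
                }
            } finally {
                reader.close();
            }
            return xml.toString();
        }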

    Read the article

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data being shared is a few megabytes in size and, when updated, is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows: | t0 | actual data | t1 | where t0 and t1 are copies of the time when the writer began its update (with enough precision that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader, on the other hand, reads t0, then the data, then t1. If the reader gets the same value for t0 and t1 then it considers the data consistent and valid; if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to get the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly, ascending through the address space. Is that assumption valid?)
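
    For comparison, a minimal sketch of the reader written as a seqlock-style loop (hypothetical layout with an integer timestamp, using GCC's __sync_synchronize builtin): the timestamp check on its own is not enough, because both the compiler and the CPU are free to reorder the loads around the copy unless something like a barrier stops them.

        #include <string.h>

        #define DATA_SIZE (4 * 1024 * 1024)   /* hypothetical size of the shared data */

        struct segment {
            volatile unsigned long t0;        /* written last by the writer  */
            char data[DATA_SIZE];
            volatile unsigned long t1;        /* written first by the writer */
        };

        /* Hedged sketch: retry until t0 and t1 match, with barriers keeping the
           timestamp loads on either side of the data copy. */
        void read_consistent(const struct segment *shm, char *out) {
            unsigned long a, b;
            do {
                a = shm->t0;
                __sync_synchronize();
                memcpy(out, shm->data, DATA_SIZE);
                __sync_synchronize();
                b = shm->t1;
            } while (a != b);
        }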

    Read the article

  • Setting Vertical-Align for a button

    - by danish
    I have a user control, and in it I have added an HTML table in which there is a button. I need to have the buttons aligned to the bottom of the cell. I tried setting the property in the CSS file, but the style does not get applied. What is it that I am doing wrong? ASCX file: <link href="CSSFile.css" rel="stylesheet" type="text/css" /> . . . <td> <asp:Button ID="btnOK" runat="server" Text="OK" Width="66px" CssClass="ButtonClass"/> <asp:Button ID="btnClose" runat="server" Text="Close" Width="66px"/> </td> CSS File: ButtonClass { border: thin groove #000000; vertical-align: bottom; color: #000000; background-color: #99FFCC; } The CSS file and the user control reside in the same folder.
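
    A minimal sketch of CSS that would be picked up, assuming the markup above (the buttonCell class on the <td> is hypothetical): a class selector needs a leading dot, and vertical-align positions a cell's content, so bottom-aligning the buttons is done on the <td> rather than on the buttons themselves.

        /* Hedged sketch: without the leading dot, "ButtonClass { ... }" looks for a
           <ButtonClass> element and never matches the rendered <input>. */
        .ButtonClass {
            border: thin groove #000000;
            color: #000000;
            background-color: #99FFCC;
        }

        /* vertical-align on the table cell aligns its content to the bottom. */
        td.buttonCell {
            vertical-align: bottom;
            height: 60px;   /* assumes the cell is taller than the buttons */
        }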

    Read the article

  • Chrome not sending POST requests on localhost, Firefox works fine

    - by AP257
    I have copied the simple Django forms example exactly, running on localhost. The basic contact form example should submit a POST request when you click the Submit button. I'm running Chrome on Mac Snow Leopard, and whenever I submit the form, the page simply reloads with an empty form. I can see from the runserver output that it's not sending a POST - instead it's sending a GET request. If I open the same page in Firefox on Mac Snow Leopard and submit the form, I can see it's sending a POST request (as it should be). Looking at the source in Chrome, the form definitely says method="post". <form action="/contact/" method="post"> <p><label for="id_subject">Subject:</label> <input id="id_subject" type="text" name="subject" maxlength="100" /></p> <p><label for="id_message">Message:</label> <input type="text" name="message" id="id_message" /></p> <p><label for="id_sender">Sender:</label> <input type="text" name="sender" id="id_sender" /></p> <p><label for="id_cc_myself">Cc myself:</label> <input type="checkbox" name="cc_myself" id="id_cc_myself" /></p> <input type="submit" value="Submit" /> </form> External sites with POST forms seem to work OK in Chrome. In addition, if I fill the form in incorrectly, in Chrome the page just reloads with a GET request, as before; in Firefox the form gets validated, as it should. I've tried other POST forms on localhost and got the same result. I know Chrome for Mac has its quirks, but what on earth is going on?

    Read the article

  • PHP GD imagecreatefromstring discards transparency

    - by meustrus
    I've been trying to get transparency to work with my application (which dynamically resizes images before storing them), and I think I've finally narrowed down the problem after much misdirection about imagealphablending and imagesavealpha: the source image is never loaded with proper transparency! // With this line, the output image has no transparency (where it should be // transparent, colors bleed out randomly or it's completely black, depending // on the image) $img = imagecreatefromstring($fileData); // With this line, it works as expected. $img = imagecreatefrompng($fileName); // Blah blah blah, lots of image resize code into $img2 goes here; I finally // tried just outputting $img instead. header('Content-Type: image/png'); imagealphablending($img, FALSE); imagesavealpha($img, TRUE); imagepng($img); imagedestroy($img); Loading the image from a file would be seriously awkward architecturally; this code is used by a JSON API queried from an iPhone app, and in this case it's easier (and more consistent) to upload images as base64-encoded strings in the POST data. Do I absolutely need to store the image as a file (just so that PHP can load it into memory again)? Is there maybe a way to create a stream from $fileData that can be passed to imagecreatefrompng?
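
    A minimal sketch of the stream idea from the last question, assuming allow_url_fopen is enabled: GD's imagecreatefrompng() accepts a stream-wrapper path, so the decoded POST data can be handed to it through the data:// wrapper without ever touching the filesystem.

        <?php
        // Hedged sketch: $fileData holds the raw PNG bytes, as in the question.
        $img = imagecreatefrompng('data://image/png;base64,' . base64_encode($fileData));
        imagealphablending($img, false);
        imagesavealpha($img, true);
        header('Content-Type: image/png');
        imagepng($img);
        imagedestroy($img);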

    Read the article

  • Twitter OAuth callback URL

    - by jtymann
    I am having a problem with Twitter's OAuth authentication and using a callback URL. I am coding in PHP and using the sample code referenced by the Twitter wiki, http://github.com/abraham/twitteroauth I got that code, tried a simple test and it worked nicely. However, I want to specify the callback URL programmatically, and the example did not support that. So I quickly modified the getRequestToken() method to take a parameter, and now it looks like this: function getRequestToken($params = array()) { $r = $this->oAuthRequest($this->requestTokenURL(), $params); $token = $this->oAuthParseResponse($r); $this->token = new OAuthConsumer($token['oauth_token'], $token['oauth_token_secret']); return $token; } and my call looks like this: $tok = $to->getRequestToken(array('oauth_callback' => 'http://127.0.0.1/twitter_prompt/index.php')); This is the only change I made, and the redirect works like a charm; however, I get a "Could not authenticate you" error when I then try to use my newly granted access to make a call. Also, the application never actually gets added to the user's authorized connections. Now, I read the specs and I thought all I had to do was specify the parameter when getting the request token. Could someone a little more seasoned in OAuth and Twitter give me a hand? Thank you

    Read the article

  • Trying to calculate large numbers in Python with gmpy. Python keeps crashing?

    - by Ryan Peschel
    I was recommended to use gmpy to help calculate large numbers efficiently. Before, I was just using plain Python, and my script ran for a day or two and then ran out of memory (not sure how that happened, because my program's memory usage should basically be constant throughout... maybe a memory leak?). Anyway, I keep getting this weird error after running my program for a couple of seconds: mp_allocate< 545275904->545275904 > Fatal Python error: mp_allocate failure This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information. Also, Python crashes and Windows 7 gives me the generic "python.exe has stopped working" dialog. This wasn't happening when I used standard Python integers. Now that I have switched to gmpy I get this error just seconds into running my script. I thought gmpy was specialized in dealing with large-number arithmetic? For reference, here is a sample program that produces the error: import gmpy2 p = gmpy2.xmpz(3000000000) s = gmpy2.xmpz(2) M = s**p for x in range(p): s = (s * s) % M I have 10 gigs of RAM, and without gmpy this script ran for days without running out of memory (still not sure how that happened, considering s never really gets larger). Anyone have any ideas? EDIT: Forgot to mention I am using Python 3.2

    Read the article

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events: 1) A message that triggers a long-running job is published to queue Q1. Call this M1. 2) M1 is dispatched to consumer C1, kicking off a long operation. 3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3. 4) M2 is dispatched to C2 which quickly runs the short job. 5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched. Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.

    Read the article

  • Android Spinner not displaying list items

    - by user300339
    I think I am going crazy right now. I am trying to create a spinner populated from a database table, but for some reason the text of the dropdown list items is not displayed. I have looked all over and have seen other posts from people having this same problem. Can anyone help? speciesList = (Spinner) findViewById(R.id.speciesList); spinnerCursor = nsfdb.fetchAllSpecies(); startManagingCursor(spinnerCursor); //String []cArrayList = new String[]{"dog", "cat", "horse", "other"}; String[] from = new String[]{"species"}; int[] to = new int[]{R.id.text1}; SimpleCursorAdapter locations = new SimpleCursorAdapter(this, R.layout.loc_row, spinnerCursor, from, to); locations.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); speciesList.setAdapter(locations); The spinner gets created just fine and is populated with 4 items, but whenever I click on the spinner I see 4 items with no text, just radio buttons. If I select any of them I get the correct selected item value, but no text is displayed.
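
    A minimal sketch using the stock Android layouts, assuming the cursor really does contain a "species" column as in the code above: the built-in spinner layouts put their TextView at android.R.id.text1, so mapping the column to that id gives both the closed spinner and the dropdown rows a view to render the text in.

        // Hedged sketch: same cursor, but bound to the framework's own layouts/ids.
        SimpleCursorAdapter adapter = new SimpleCursorAdapter(
                this,
                android.R.layout.simple_spinner_item,
                spinnerCursor,
                new String[] { "species" },
                new int[] { android.R.id.text1 });
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        speciesList.setAdapter(adapter);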

    Read the article

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most-used data. Here are the two approaches we have thought of right now: using memcache on all major queries and leaving it alone to do what it does best, or using memcache for the most commonly retrieved data and combining it with a standard hard-drive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, despite the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache and simply upgrade the memory as the load increases with the number of users? Thanks a lot!
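
    A minimal sketch of the combined approach (assumes the pecl Memcached client and a writable cache directory; all names are hypothetical): check memory first, fall back to the disk cache, and promote disk hits back into memcache so hot keys migrate to the faster tier on their own.

        <?php
        // Hedged sketch of a two-tier read: memcache, then disk, then miss (null).
        function cache_get(Memcached $mc, $dir, $key) {
            $value = $mc->get($key);
            if ($mc->getResultCode() === Memcached::RES_SUCCESS) {
                return $value;
            }
            $file = $dir . '/' . md5($key) . '.cache';
            if (is_file($file)) {
                $value = unserialize(file_get_contents($file));
                $mc->set($key, $value, 300);   // promote to memory with a 5 minute TTL
                return $value;
            }
            return null;
        }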

    Read the article

  • How to scale a PHP application (servers, mysql, memcache)

    - by Stéphane Goetz
    Hi, I'm currently creating a website for a social project in Switzerland, and before there is an overflow of users I want to prepare the application to scale. I have answered many questions by myself, but some are left. Let me explain what I want to do. First, at the beginning, the application will have only one server (for a short time) with DNS, PHP, MySQL, data, and memcache. Second, I will then split them in two: DNS, MySQL, memcache / data, PHP. Third, and here is the problem, I don't know exactly how to arrange things from there to keep the application running well. I could do: Front: load balancer, memcache, DNS; Web 1: PHP, data; Web 2: PHP, data; MySQL. That would be the scheme; all PHP sessions are kept in the DB. BUT how do I sync the data? Do I run an rsync to keep the servers up to date? Do I put the data on a separate (network) disk to be safe? But in that case, what do I do about user uploads? And if the website becomes more successful and we have to move to bigger structures, wouldn't that create some latency on updates? Or would it be better to go directly to Amazon's web services? Some info: I use CodeIgniter as the framework, and Linux as the web server OS (distribution not chosen yet, but it should be Debian). Thanks in advance for your answers.

    Read the article

  • CSS background image is being cut off

    - by Ronedog
    I have an unordered list, and the background image is being cut off when I try to place it next to the text. I'm using jQuery to add the class to the anchor tag to display the image, and that works fine; the only problem is that the image gets cut off. I've been playing around with the CSS but can't seem to figure out how to make the image display properly. It seems like the <li> is hiding the image behind it somehow. Can I place the image in front of the <li> to make it display, or am I missing something else? Can someone help me? Thanks. Here's the HTML: <ul id="nav"> <li> <a class="folder_closed">Item 1</a> </li> </ul> Here's the CSS: ul#nav{ margin-left:0; margin-right:0; padding-left:0px; text-indent:15px; } #nav > li{ vertical-align: top; text-align:left; clear: both; margin-left:0px; margin-right:0px; padding-right:0px; padding-left:15px; } .folder_open{ position:relative; background-image: url(../images/maximize.png); background-repeat: no-repeat; background-position: -5px 1px; } .folder_closed{ position:relative; background-image: url(../images/minimize.png); background-repeat: no-repeat; background-position: -5px 1px; }
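
    A minimal sketch of one common fix (the 16px icon size is an assumption; adjust to the real images): an inline <a> only paints its background inside the line box, so giving the anchor a block box with enough left padding and height stops the icon from being clipped, without touching the <li> or its stacking order at all.

        /* Hedged sketch: applies to both toggle states. */
        .folder_open,
        .folder_closed {
            display: block;              /* or inline-block */
            padding-left: 20px;          /* room for the icon */
            min-height: 16px;            /* at least the icon height */
            background-repeat: no-repeat;
            background-position: 0 center;
        }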

    Read the article

  • How can I get rid of 'ORA-01489: result of string concatenation is too long' in this query?

    - by core_pro
    This query gets the dominating sets in a network. So, for example, given a network A<----->B B<----->C B<----->D C<----->E D<----->C D<----->E F<----->E it returns B,E B,F A,E but it doesn't work for large data because I'm using string methods in my result. I have been trying to remove the string methods and return a view or something, but to no avail. With t as (select 'A' as per1, 'B' as per2 from dual union all select 'B','C' from dual union all select 'B','D' from dual union all select 'C','B' from dual union all select 'C','E' from dual union all select 'D','C' from dual union all select 'D','E' from dual union all select 'E','C' from dual union all select 'E','D' from dual union all select 'F','E' from dual) ,t2 as (select distinct least(per1, per2) as per1, greatest(per1, per2) as per2 from t union select distinct greatest(per1, per2) as per1, least(per1, per2) as per1 from t) ,t3 as (select per1, per2, row_number() over (partition by per1 order by per2) as rn from t2) ,people as (select per, row_number() over (order by per) rn from (select distinct per1 as per from t union select distinct per2 from t) ) ,comb as (select sys_connect_by_path(per,',')||',' as p from people connect by rn > prior rn ) ,find as (select p, per2, count(*) over (partition by p) as cnt from ( select distinct comb.p, t3.per2 from comb, t3 where instr(comb.p, ','||t3.per1||',') > 0 or instr(comb.p, ','||t3.per2||',') > 0 ) ) ,rnk as (select p, rank() over (order by length(p)) as rnk from find where cnt = (select count(*) from people) order by rnk ) select distinct trim(',' from p) as p from rnk where rnk.rnk = 1

    Read the article

  • TransactionScope question - how can I keep the DTC from getting involved in this?

    - by larryq
    (I know the circumstances surrounding the DTC and promoting a transaction can be a bit mysterious to those of us not in the know, but let me show you how my company is doing things, and if you can tell me why the DTC is getting involved, and if possible, what I can do to avoid it, I'd be grateful.) I have code running on an ASP.Net webserver. We have one database, SQL 2008. Our data access code looks something like this-- We have a data access layer that uses a wrapper object for SQLConnections and SQLCommands. Typical use looks like this: void method1() { objDataObject = new DataAccessLayer(); objDataObject.Connection = SomeConnectionMethod(); SqlCommand objCommand = DataAccessUtils.CreateCommand(SomeStoredProc); //create some SqlParameters, add them to the objCommand, etc.... objDataObject.BeginTransaction(IsolationLevel.ReadCommitted); objDataObject.ExecuteNonQuery(objCommand); objDataObject.CommitTransaction(); objDataObject.CloseConnection(); } So indeed, a very thin wrapper around SqlClient, SqlConnection etc. I want to run several stored procs in a transaction, and the wrapper class doesn't allow me access to the SqlTransaction so I can't pass it from one component to the next. This led me to use a TransactionScope: using (TransactionScope tx1 = new TransactionScope(TransactionScope.RequiresNew)) { method1(); method2(); method3(); tx1.Complete(); } When I do this, the DTC gets involved, and unfortunately our webservers don't have "allow remote clients" enabled in the MSDTC settings-- getting IT to allow that will be a fight. I'd love to avoid DTC becoming involved but can I do it? Can I leave out the transactional calls in methods1-3() and just let the TransactionScope figure it all out?
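
    A minimal sketch of the usual way to keep the transaction lightweight (RunStoredProc is a hypothetical helper standing in for the wrapper calls in method1-3): if exactly one SqlConnection is opened inside the scope and stays open for all three calls, only one resource ever enlists, the transaction stays a local SQL Server transaction, and the DTC never gets involved. The catch is that the data access layer has to accept a shared connection instead of creating its own.

        // Hedged sketch: one connection, one enlistment, no promotion to the DTC.
        using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();                  // enlists in the ambient transaction
            RunStoredProc(conn, "Proc1");
            RunStoredProc(conn, "Proc2");
            RunStoredProc(conn, "Proc3");
            scope.Complete();             // anything short of this rolls the work back
        }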

    Read the article

  • MySQL query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated by each query? I have basically 5-6 instances of a PHP script running in parallel, launched via cron. Each script gets all the records in the items table and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works: function getCurrentItem() { $sql = "SELECT currentItemId from settings"; $result = $this->db->query($sql); return $result->get('currentItemId'); } function setCurrentItem($id) { $sql = "UPDATE settings SET currentItemId='$id'"; $this->db->query($sql); } $currentItem = $this->getCurrentItem(); $sql = "SELECT * FROM items WHERE status='pending' AND id > $currentItem'"; $result = $this->db->query($sql); $items = $result->getAll(); foreach ($items as $i) { //Check if $i has been processed by a different instance of the script, and if so, //leave it untouched. if ($this->getCurrentItem() > $i->id) continue; $this->setCurrentItem($i->id); // Process the item here } But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and the moment the database actually updates the record. Is that true? And if so, what other mechanism should I use to make sure the PHP scripts always get only the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
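
    A minimal sketch of a race-free alternative (assumes a new, hypothetical claimed_by column on items, and reuses the $this->db wrapper from the question): the real problem is that getCurrentItem() and setCurrentItem() are two separate statements, so two scripts can read the same value before either one writes. Claiming a batch of rows with a single atomic UPDATE removes that window entirely.

        <?php
        // Hedged sketch: each worker stamps its pid onto up to 10 unclaimed rows in
        // one statement, then processes only the rows it actually claimed.
        $workerId = getmypid();
        $this->db->query(
            "UPDATE items SET claimed_by = $workerId
             WHERE status = 'pending' AND claimed_by IS NULL
             ORDER BY id LIMIT 10");
        $result = $this->db->query(
            "SELECT * FROM items WHERE status = 'pending' AND claimed_by = $workerId");
        foreach ($result->getAll() as $i) {
            // Process the item here; no other worker can hold the same row.
        }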

    Read the article

  • How to convert a model into a url properly in ASP.NET MVC?

    - by 4eburek
    From the SEO standpoint it is nice to see urls in a format that explains what is located on the page. Let's look at a situation like this (it is just an example). We need to display a page about some user and decided to have this url template for the page: /user/{UserId}/{UserCountry}/{UserLogin}, and we create this model for the purpose: public class UserUrlInfo{ public int UserId{get;set;} public string UserCountry{get;set;} public string UserLogin{get;set;} } I want to create a controller method that takes a UserUrlInfo object rather than all the individual fields. The classic controller method for the url template shown above is the following: public ActionResult Index(int UserId, string UserCountry, string UserLogin){ return View(); } and we need to call it like this: Html.ActionLink<UserController>(x=>Index(user.UserId, user.UserCountry, user.UserLogin), "See user page") I want to create a controller method like this: public ActionResult Index(UserUrlInfo userInfo){ return View(); } and call it like this: Html.ActionLink<UserController>(x=>Index(user), "See user page") It actually works when we add one more route and point it to the same controller method, so the routing becomes: /user/{userInfo} /user/{UserId}/{UserCountry}/{UserLogin} In this situation the routing engine calls the ToString method of our model (which we need to override) and it works ALMOST always. But sometimes it fails and shows a url like /page/?userInfo=/US/John, so my workaround does not always work properly. Does anybody know how to work with urls this way?
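
    A minimal sketch of an alternative that keeps the original three-segment route (the UserLink helper is hypothetical and assumes MVC 2's MvcHtmlString): wrap the standard ActionLink overload that takes route values, so callers still pass the whole object while the generated url always matches /user/{UserId}/{UserCountry}/{UserLogin} instead of falling back to ToString().

        using System.Web.Mvc;
        using System.Web.Mvc.Html;

        public static class UserUrlExtensions
        {
            // Hedged sketch: builds the route values from the model's properties.
            public static MvcHtmlString UserLink(this HtmlHelper html, UserUrlInfo user, string text)
            {
                return html.ActionLink(text, "Index", "User",
                    new { user.UserId, user.UserCountry, user.UserLogin }, null);
            }
        }

    Usage would then be Html.UserLink(user, "See user page"), which keeps the call site as compact as the single-parameter version.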

    Read the article

  • Looking for an all-in-one DRM/installer/CD-creation kit

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM and installation of our products when a user gets them from our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of what makes it antiquated is that the automated system that works with the installer- and DRM-creation software we have is a disaster that needs to be put out of my misery. The list of products we currently produce, which a new system MUST be capable of producing: retail CDs, with a certain level of obfuscation to make copying difficult; downloadable installers that time out after a few hours of use of the product (after the time has expired, removing and reinstalling the product will still leave you blocked from use); and installers that will fail to work after a certain date. I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line requirement is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

    Read the article

  • What does this attempted trojan horse code do?

    - by bstullkid
    It looks like this just sends a ping, but what's the point of that when you can just use ping? /* WARNING: this is someone's attempt at writing a malware trojan. Do not compile and *definitely* don't install. I added an exit as the first line to avoid mishaps - msw */ int main (int argc, char *argv[]) { exit(1); unsigned int pid = 0; char buffer[2]; char *args[] = { "/bin/ping", "-c", "5", NULL, NULL }; if (argc != 2) return 0; args[3] = strdup(argv[1]); for (;;) { gets(buffer); /* FTW */ if (buffer[0] == 0x6e) break; switch (pid = fork()) { case -1: printf("Error Forking\n"); exit(255); case 0: execvp(args[0], args); exit(1); default: break; } } return 255; }

    Read the article
