Search Results

Search found 31774 results on 1271 pages for 'chris go'.


  • How can I run a local Windows application and have its output piped into the browser?

    - by Trey Sherrill
    I have a Windows application (the .EXE is written in C and built with MS Visual Studio) that outputs ASCII text to stdout. I'm looking to enhance the ASCII text to include limited HTML with a few links. I'd like to invoke this application (.EXE file), take its output and pipe it into a browser. This is not a one-time thing; each new web page would be another run of the local application! The HTML/JavaScript page below has worked for me to execute the application, but the output goes into a DOS box window instead of being piped into the browser. I'd like to update this HTML page so the browser captures that text (enhanced with HTML) and displays it.

        <body>
        <script>
        function go() {
            w = new ActiveXObject("WScript.Shell");
            w.run('C:/DL/Browser/mk_html.exe');
            return true;
        }
        </script>
        <form>
            Run My Application (Window with explorer only)
            <input type="button" value="Go" onClick="return go()">
        </form>
        </body>
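    A minimal sketch of one way to capture the output, assuming Internet Explorer with ActiveX allowed for the local page: WScript.Shell's Exec() (unlike Run()) exposes the child process's stdout, which can then be written into the page. The output element id is made up for illustration.

        <div id="output"></div>
        <script>
        function goCapture() {
            var sh = new ActiveXObject("WScript.Shell");
            // Exec starts the EXE and gives access to its StdOut stream
            var proc = sh.Exec("C:\\DL\\Browser\\mk_html.exe");
            // ReadAll blocks until the EXE finishes, then returns everything it printed
            document.getElementById("output").innerHTML = proc.StdOut.ReadAll();
            return true;
        }
        </script>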

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377 ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single proc to perform the whole process, and typically only proc 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc:

        USE [Commerce]
        GO
        /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId]
            @companyId UNIQUEIDENTIFIER,
            @DecryptionKey NVARCHAR(MAX)
        AS
        SET NoCount ON

        DECLARE @cardId uniqueidentifier
        DECLARE @tmpdecryptedCardData VarChar(MAX);
        DECLARE @decryptedCardData VarChar(MAX);
        DECLARE @tmpTable as Table
        (
            CardId uniqueidentifier,
            DecryptedCard NVarChar(Max)
        )

        DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR
            Select cardId from CreditCards
            where companyId = @companyId and Active=1
            order by addedBy desc --2

        OPEN creditCards --3
        FETCH creditCards INTO @cardId -- prime the cursor

        WHILE @@Fetch_Status = 0
        BEGIN
            --OPEN creditCards
            DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR
                select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey))
                FROM CreditCardData
                where cardid = @cardId
                order by valueOrder

            OPEN creditCardData
            FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor

            WHILE @@Fetch_Status = 0
            BEGIN
                print 'CreditCardData'
                print @tmpdecryptedCardData
                set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData
                print '@decryptedCardData'
                print @decryptedCardData;
                FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next
            END

            CLOSE creditCardData
            DEALLOCATE creditCardData

            insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData )
            set @decryptedCardData = ''

            FETCH NEXT FROM creditCards INTO @cardId -- fetch next
        END

        select CardId, DecryptedCard FROM @tmpTable

        CLOSE creditCards
        DEALLOCATE creditCards
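    A hedged sketch of the usual first step, assuming the same tables and columns as above: replace the nested cursors with a single set-based query (here using the FOR XML PATH('') concatenation trick), which at least gives the optimizer a chance to produce a parallel plan.

        SELECT cc.cardId,
               (SELECT CONVERT(nvarchar(MAX),
                               DecryptByCert(Cert_Id('Oh-Nay-Nay'), d.EncryptedCard, @DecryptionKey))
                FROM CreditCardData d
                WHERE d.cardid = cc.cardId
                ORDER BY d.valueOrder
                FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)') AS DecryptedCard
        FROM CreditCards cc
        WHERE cc.companyId = @companyId
          AND cc.Active = 1
        ORDER BY cc.addedBy DESC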

    Read the article

  • Form values in a list item

    - by Tri
    Here is the site mock-up I'm working on for my job: http://dev.arm.gov/~noensie/dqhands/cgi-bin/explorer. I'm still a novice in web development and I need help placing form values in a list item to pass on to another page. I'd rather not go into great detail about the purpose of this website, but in terms of its basic use, you select the parameters in the middle column for a request and add it to the list in the right column by pressing the "add request" button when it appears. After the list has been populated, a user would submit the selected requests, which directs them to another page based on the requests selected (or added to the list, same thing). That last sentence is where I'm having a problem. Right now, each request in the list in the right column is an <li> element, and I assign it attribute values that need to be passed on to the next page. I tried inserting a hidden input with the same values, but I'm still not sure how to utilize that; I'm not even sure using a hidden input is the correct way. Also, the "submit request" button is located outside the <form> block. I was going to use JavaScript and jQuery to have the button serialize the values in the <form> block, but I don't know quite how to do that. Go ahead and take a look at my JavaScript code (index.js) and slay me, or, I mean, my code. It's still pretty elementary and short (~260 lines, that's short right?). I will take any help for this problem (as a matter of fact, if you see any other problems or a better way of implementing something, go ahead and mention that too); tips, advice, code samples, or whatever else you can contribute will be greatly appreciated. Tri
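    A rough sketch of the hidden-input approach, with made-up ids and attribute names: just before submitting, copy each request <li>'s stored values into hidden inputs inside the form, then submit the form, so the values travel as ordinary POST parameters.

        // ids ('submit-requests', 'request-list', 'request-form') and the 'data-request'
        // attribute are illustrative placeholders for whatever the page actually uses
        $('#submit-requests').click(function () {
            $('#request-list li').each(function (i) {
                $('<input type="hidden" />')
                    .attr('name', 'request_' + i)
                    .val($(this).attr('data-request'))
                    .appendTo('#request-form');
            });
            $('#request-form').submit();
        });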

    Read the article

  • How to feature-detect/test for specific jQuery (and JavaScript) methods/functions used

    - by Zildjoms
    Good day everyone, hope you're all doing awesome. I'm very new to JavaScript and jQuery, and I (think) I have come up with a simple fade-in/out implementation on a site I'm working on (check out http://www.s5ent.com/expandjs.html - if you have the time to check it for inefficiency or whatnot, that'd be real sweet). I use the following functions/methods/collections and I would like to do a feature test before using them. Uhm... how? Or is there a better way to go about this?

    jQuery:

        $
        .fadeIn([duration])
        .fadeOut([duration])
        .attr(attributeName, value)
        .append(content)
        .each(function(index, Element))
        .css(propertyName, value)
        .hover(handlerIn(eventObject), handlerOut(eventObject))
        .stop([clearQueue], [jumpToEnd])
        .parent()
        .eq(index)

    JavaScript:

        setInterval(expression, timeout)
        clearInterval(timeoutId)
        setTimeout(expression, timeout)
        clearTimeout(timeoutId)

    I tried looking into jQuery.support for the jQuery ones, but I find myself running into conceptual problems with it, i.e. for fadeIn/fadeOut I (think I) should test for $.support.opacity, but that would be false in IE, whereas IE6+ can still render the fades fairly well. Also, I'm using jQuery 1.2.6 because that's enough for what I need; the support object is in 1.3, so I'm hoping to avoid dragging in more unnecessary code if I can. I also worked with browser sniffing, no matter how frowned-upon, but that's also a bigger problem for me because of non-standard UA strings, spoofing and everything else I'm not aware of. So how do you think I should go about this? Or should I even? Is there a better way to go about making sure that I don't run code that'll eventually break the page? I've set it up to degrade into a CSS hover when JavaScript isn't there. Expertise needed. Much appreciated, thanks guys!
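    A minimal sketch of direct feature testing, which works in jQuery 1.2.6 without the support object: check that each method actually exists before wiring up the fades, and otherwise do nothing so the CSS hover fallback stays in effect (initFades is a made-up name for the existing setup code).

        var jqueryOk = window.jQuery &&
            jQuery.isFunction(jQuery.fn.fadeIn) && jQuery.isFunction(jQuery.fn.fadeOut) &&
            jQuery.isFunction(jQuery.fn.hover) && jQuery.isFunction(jQuery.fn.stop) &&
            jQuery.isFunction(jQuery.fn.eq) && jQuery.isFunction(jQuery.fn.parent);
        var timersOk = typeof window.setTimeout !== 'undefined' &&
            typeof window.clearTimeout !== 'undefined' &&
            typeof window.setInterval !== 'undefined' &&
            typeof window.clearInterval !== 'undefined';
        if (jqueryOk && timersOk) {
            initFades();   // hypothetical: the code that attaches the fade-in/out behaviour
        }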

    Read the article

  • DAL Layer: EF 4.0 or a normal data access layer with stored procedures

    - by Harryboy
    Hello experts, I am working on one mid-to-large-size application which will be used as a product, and we need to decide on our DAL layer. The application UI is in Silverlight and the DAL layer is going to sit behind a service layer. We are also moving ahead with a domain model, so our DB tables and domain classes do not have the same structure, and patterns like Data Mapper and Repository will definitely come into the picture. I need to design the DAL layer considering the factors below, in priority order:

        1. Speed of development with above-average performance
        2. Maintenance
        3. Future support and stability of the technology
        4. Performance

    Limitations:

        1. As we need to strictly go ahead with Microsoft, we cannot use NHibernate or any other ORM except EF 4.0.
        2. We can use any code generation tool (it should be open source or very cheap), but it should only generate code in .NET, so there would not be any licensing issue on a per-copy basis.

    Questions: I read so many articles about EF 4.0; at the outset it looks like it is still lacking features compared to NHibernate, but it is considerably better than EF 1.0. So, do you feel that we should go ahead with EF 4.0, or should we stick to ADO.NET and use whichever code generation tool you feel is best, like CodeSmith or any other? I also need to answer questions like how long it would take to port the application from EF 4.0 to ADO.NET if in the future we get stuck with EF 4.0 over some feature or have a serious performance issue; in the reverse case, if we go ahead and choose ADO.NET, how long would it take to switch to EF 4.0? Lastly, as I was going through the articles I found the code-only approach (with POCO classes) seems best suited to our requirement, as switching from one technology to the other is really easy. Please share your thoughts and please advise on the above questions.
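    As an illustration only (type names are made up), one way to keep the switching cost low regardless of the EF 4.0 vs. ADO.NET decision is to hide the choice behind repository interfaces that the service layer depends on, so either technology can implement them:

        using System;
        using System.Collections.Generic;

        // Domain class: shaped for the domain, not for the table layout.
        public class Customer
        {
            public Guid Id { get; set; }
            public string Name { get; set; }
        }

        // The service layer codes against this; an EF 4.0 ObjectContext-based class or a
        // plain ADO.NET + stored-procedure class can each implement it.
        public interface ICustomerRepository
        {
            Customer GetById(Guid id);
            IList<Customer> GetAll();
            void Save(Customer customer);
        }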

    Read the article

  • Avoiding anemic domain model - a real example

    - by cbp
    I am trying to understand anemic domain models and why they are supposedly an anti-pattern. Here is a real-world example. I have an Employee class, which has a ton of properties - name, gender, username, etc.

        public class Employee
        {
            public string Name { get; set; }
            public string Gender { get; set; }
            public string Username { get; set; }
            // Etc.. mostly getters and setters
        }

    Next we have a system that involves rotating incoming phone calls and website enquiries (known as 'leads') evenly amongst sales staff. This system is quite complex as it involves round-robining enquiries, checking for holidays, employee preferences etc. So this system is currently separated out into a service: EmployeeLeadRotationService.

        public class EmployeeLeadRotationService : IEmployeeLeadRotationService
        {
            private IEmployeeRepository _employeeRepository;
            // ...plus lots of other injected repositories and services

            public Employee SelectEmployee(ILead lead)
            {
                // Etc. lots of complex logic
            }
        }

    Then behind our website enquiry form we have code like this:

        public void SubmitForm()
        {
            var lead = CreateLeadFromFormInput();
            var selectedEmployee = Kernel.Get<IEmployeeLeadRotationService>().SelectEmployee(lead);
            Response.Write(selectedEmployee.Name + " will handle your enquiry. Thanks.");
        }

    I don't really encounter many problems with this approach, but supposedly this is something that I should run screaming from because it is an anemic domain model. But for me it is not clear where the logic in the lead rotation service should go. Should it go in the lead? Should it go in the employee? What about all the injected repositories etc. that the rotation service requires - how would they be injected into the employee, given that most of the time when dealing with an employee we don't need any of these repositories?

    Read the article

  • ECM (Niche Vs Mass Market)

    - by Luj Reyes
    Hi everyone, I recently started a little company with a couple of guys. Ours is the typical startup: a lot of ideas, dreams, talent and work hours :P. Our initial business plan was to develop a DM (document manager) with several features found in Dropbox and other tools, but with a big differentiator. Then a business guy joined the team (I must say that several of us could be called 'business guys', but we are mainly hackers; he is just another 'networking guy'), and along with him came a market analysis for a DM aimed at a very specific and narrow niche. We have many reasons to believe in his market study, and the idea is the classic "The market is X million, so if we grab 10%...", and the market really is there to grab because all the big providers deemed it too small and fled; let's say the market is 5 million USD and demands very specific features. If we decide to go for this niche product we face a sales cycle of about 7 months, and the main goal of this revenue is to fund more ambitious projects. (Institutional VC is out of the question if you want to keep a marginal ownership of your company in my country.) The only overlap between the niche and the mass-market product features is the ability to store documents; everything else requires that we focus all of our efforts towards one or the other. I've studied a lot about the differences between mass and niche markets, but I want to hear from people with actual experience. So everything comes down to this: if you have a really "saleable" idea, what is the right thing to do: go for the niche, or go for the big prize and target primarily the mass market? Thanks for your input.

    Read the article

  • How to combine twill and Python into one script that can be run on Google App Engine?

    - by brilliant
    Hello everybody! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer: C:\Python25. The twill folder ("twill-0.9") is located here: E:\tmp\twill-0.9. Here is a script that I've been using in twill:

        go "some website's sign-in page URL"
        formvalue 2 userid "my login"
        formvalue 2 pass "my password"
        submit
        go "URL of some other page from that website"
        save_html result.txt

    This script helps me log in to one website, in which I have an account, record the HTML code of some other page of that website (one that I can access only after logging in), and store it in a file named "result.txt". (Of course, before using this script I first need to replace "my login" with my real login, "my password" with my real password, "some website's sign-in page URL" and "URL of some other page from that website" with real URLs of that website, and the number 2 with the number of the form that is used as the sign-in form on that website's log-in page.) I store this script in a "test.twill" file located in my "twill-0.9" folder:

        E:\tmp\twill-0.9\test.twill

    I run this file from my command prompt:

        python twill-sh test.twill

    Now, I have also installed the Google App Engine SDK and have been using it for a while. For example, I've been using this code:

        import hashlib
        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition ")
        print m.hexdigest()

    This code transforms the phrase "Nobody inspects the spammish repetition" into an md5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on Google App Engine? Let's say I want my code to log in to a website from Google App Engine, go to another page on that website, record its HTML code (that's what my twill code does) and then transform this HTML code into its md5 digest (that's what my second code does). So how can I combine those two into one Python script? I guess it should be done somehow by importing twill, but how? Can a Python script - the one that is being run by Google App Engine - import twill from somewhere on the internet? Or perhaps twill is already installed on Google App Engine?
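    A hedged sketch of one way to do it without twill at all, assuming the App Engine runtime's urllib2/cookielib support (requests go through URL Fetch) is enough for the site in question: do the login and the fetch with the standard library, then hash the HTML with hashlib. The form field names here ('userid', 'pass') stand in for whatever the real sign-in form uses.

        import urllib, urllib2, cookielib, hashlib

        def fetch_and_digest(signin_url, page_url, login, password):
            # keep the session cookie that the sign-in page sets
            jar = cookielib.CookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
            # POST the credentials (field names depend on the actual form)
            opener.open(signin_url, urllib.urlencode({'userid': login, 'pass': password}))
            # fetch the members-only page and hash its HTML
            html = opener.open(page_url).read()
            return hashlib.md5(html).hexdigest()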

    Read the article

  • How best to implement a "favourites" feature? (like favourite products on a data-driven website)

    - by ClarkeyBoy
    Hi, I have written a dynamic, database-driven, object-oriented website with an administration frontend etc. I would like to add a feature where customers can save items as "favourites", without having to create an account and log in, so they can come back to them later, but I don't know exactly how to go about doing this... I see three options:

        1. Log favourites against the IP address, and then attach them to an account if the customer later creates one;
        2. Force customers to create an account to be able to use this functionality;
        3. Log favourites against the IP address, but give users the option to save their favourites under a name they specify.

    The problem with option 1 is that I don't know much about IP addresses - my Dad thinks they are unique, but I know people have had problems with systems like this. The problem with options 1 and 2 is that accounts have not been opened up to customers yet - only administrators can log in at the moment. It should be easy to change this (no more than a morning's or afternoon's work) but I would also have to implement user groups too. The problem with option 3 is that if user A saves a favourites list called "My Favourites", and then user B tries to save a list under this name and it is refused, user B will then be able to access the list saved by user A because they now know it already exists. A solution to this is to password-protect lists, but if I go to all this effort I may as well implement option 2. Of course I could always use option 4: an alternative, if anyone can suggest a better solution than any of the above. So has anyone ever done something like this before? If so, how did you go about it? What do you recommend (or not recommend)? Many thanks in advance, Regards, Richard

    Read the article

  • SQL Alchemy MVC and cross controller joins

    - by Khorkrak
    When using SQLAlchemy to abstract your data access layer and using controllers as the way to access objects from that abstraction layer, how should joins be handled? For example, say you have an Orders controller class that manages Order objects and provides getOrder, saveOrder, etc. methods, and likewise a similar controller for User objects. First of all, do you even need these controllers? Should you instead just treat SQLAlchemy as "the" thing for handling data access? Why bother with object-oriented controller stuff when you have a clean declarative way to obtain and persist objects without having to write SQL directly either? Well, one reason could be that you may want to replace SQLAlchemy with direct SQL, or Storm, or whatever else, so having controller classes as an intermediate layer helps limit what would need to change. Anyway - back to the main question - assuming you have these two controllers, now let's say you want the list of orders for a certain set of users meeting some criteria. How do you go about doing this? Generally you don't want the controllers crossing domains - the Orders controller knows only about Orders and the User controller just about Users - they don't mess with each other. You also don't want to fetch all the Users that match and then feed a big list of user ids to the Orders controller to go find the matching Orders. What's needed is a join. Here's where I'm stuck - that seems to mean either the controllers must cross domains, or perhaps they should be done away with altogether and you simply do the join via SQLAlchemy directly and get the resulting User and/or Order objects as needed. Thoughts?
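    For illustration only (the models, columns and finder name are made up), a sketch of doing the join in one SQLAlchemy query, so neither controller has to hand the other a list of ids; whether such a finder lives in an "Orders" controller, a shared query module, or plain application code is exactly the design question being asked.

        from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
        from sqlalchemy.orm import sessionmaker, relation
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            country = Column(String)

        class Order(Base):
            __tablename__ = 'orders'
            id = Column(Integer, primary_key=True)
            user_id = Column(Integer, ForeignKey('users.id'))
            user = relation(User)

        def orders_for_country(session, country):
            # the join happens in SQL, in a single round trip
            return (session.query(Order)
                           .join(User)
                           .filter(User.country == country)
                           .all())

        if __name__ == '__main__':
            engine = create_engine('sqlite:///:memory:')
            Base.metadata.create_all(engine)
            session = sessionmaker(bind=engine)()
            print(orders_for_country(session, 'NZ'))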

    Read the article

  • Need Help with wxPython & pyGame

    - by Xavier
    Hi guys, I'm in need of your help and advice on an assignment I am working on. First of all, I was tasked to write a program that runs a Langton's ant simulation. For that, I managed to get the source code (from snippets.dzone.com/posts/show/5143) and edited it according to my requirements. This was done and runs on the pygame module. In addition, my task requires a GUI for users to run and navigate through the screens effectively while the Langton's ant program is running. I used wxPython, with the help of an IDE called Boa Constructor, to create the frames, buttons, textboxes, etc. - basically all the stuff needed in the interfaces. However, I've run into a problem: integrating pygame with wxPython. I've researched the internet for answers and tutorials and found wiki.wxpython.org/IntegratingPyGame and aspn.activestate.com/ASPN/Mail/Message/wxpython-users/3178042. I understand from those sites that integrating pygame with wxPython is a difficult task, with common problems like the inability to place other controls in the frame because the pygame application covers the entire panel. I really hope that you can clarify my doubts on this and advise me on the path that I should take from here. Therefore, I ask the following questions:

        1. Is it feasible to integrate pygame with wxPython?
        2. If it is not feasible, what other alternatives do I have to create a GUI that integrates pygame into it, and how do I go about it?
        3. If it is possible to integrate pygame with wxPython, how do I go about doing so?

    I really need your opinions on this.
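    A minimal sketch of the approach described on the wxPython wiki page linked above: point SDL at an existing wx.Panel via the SDL_WINDOWID environment variable before importing pygame, so pygame draws inside the wx frame instead of opening its own window. Treat it as a starting point rather than a tested recipe.

        import os
        import wx

        app = wx.App(False)
        frame = wx.Frame(None, title="Langton's Ant", size=(640, 520))
        panel = wx.Panel(frame, size=(640, 480))   # pygame will render into this panel
        frame.Show()

        # SDL must know the window handle before pygame initialises its display
        os.environ['SDL_WINDOWID'] = str(panel.GetHandle())
        if os.name == 'nt':
            os.environ['SDL_VIDEODRIVER'] = 'windib'

        import pygame
        pygame.display.init()
        screen = pygame.display.set_mode((640, 480))
        screen.fill((0, 0, 0))   # the Langton's ant drawing code would go here
        pygame.display.flip()

        app.MainLoop()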

    Read the article

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code:

        USE mydb
        GO
        BACKUP LOG mydb WITH TRUNCATE_ONLY
        GO
        DBCC SHRINKFILE(mydb_log, 8)
        GO

    Which worked fine, shrank it down to 8MB... but the DB in question is a log shipping publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" maintenance plan task and hooking it onto my log backup task? If that's the best way then fine... but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan:

        - Full backups every night
        - Transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though)

    Or maybe I just run it once a week, after I run a full backup task? What do you all think?

    Read the article

  • Load Spikes on an Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated server with enough processing power and memory. My primary web application is a Wordpress MU (2.9.2) installation. I have investigated and ruled out DOS attack, MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks every 3 minutes for the load, and restarts Apache. Restarting it helps, and the server comes back, till it happens again. There seems to be no set time frame, or visitor numbers on the site that can trigger this. Even a low number of concurrent visitors (20) can trigger it. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad. Apache is trying to serve something that is causing it to spawn more and more processes till it keels over. My question is: Given that I am convinced that this is a rewrite issue or something similar, how can I try and figure out what the issue is? What should I monitor? Apache logs are voluminous, and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate this as an issue and look for something else. Thanks! Vikram

    Read the article

  • Reversible pseudo-random sequence generator

    - by user350651
    I would like some sort of method to create a fairly long sequence of random numbers that I can flip through backwards and forwards. Like a machine with "next" and "previous" buttons that will give you random numbers. Something like 10-bit resolution (i.e. positive integers in a range from 0 to 1023) is enough, and a sequence of 100k numbers. It's for a simple game-type app; I don't need encryption-strength randomness or anything, but I want it to feel fairly random. I have a limited amount of memory available though, so I can't just generate a chunk of random data and go through it. I need to get the numbers in "interactive time" - I can easily spend a few ms thinking about the next number, but not comfortably much more than that. Eventually it will run on some sort of microcontroller, probably just an Arduino. I could do it with a simple linear congruential generator (LCG). Going forwards is simple; to go backwards I'd have to cache the most recent numbers and store some checkpoints at intervals so I can recreate the sequence from there. But maybe there IS some pseudo-random generator that allows you to go both forwards and backwards? It should be possible to hook up two linear feedback shift registers (LFSRs) to roll in different directions, no? Or maybe I can just get by with garbling the index number using a hash function of some sort? I'm going to try that first. Any other ideas?
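    A small sketch of the "garble the index with a hash" idea mentioned at the end (the mixing constants are just a common integer-hash pattern, not anything authoritative): the only state kept is the current index, so "next" and "previous" are simply index+1 and index-1.

        #include <stdint.h>
        #include <stdio.h>

        /* cheap integer mixer: not cryptographic, but scrambles consecutive indices well */
        static uint16_t value_at(uint32_t index)
        {
            uint32_t x = index;
            x ^= x >> 16;  x *= 0x45d9f3bu;
            x ^= x >> 16;  x *= 0x45d9f3bu;
            x ^= x >> 16;
            return (uint16_t)(x & 0x3FF);   /* keep 10 bits: 0..1023 */
        }

        int main(void)
        {
            uint32_t i;
            for (i = 0; i < 10; i++)        /* walking forwards... */
                printf("%u ", value_at(i));
            printf("\n%u\n", value_at(4));  /* ...and going back is just re-indexing */
            return 0;
        }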

    Read the article

  • OnTouchListener on a ToggleButton ignores the first press?

    - by Paul
    I have a ToggleButton that should run code when I press it down and more code when I let go. However, the first time I press and let go nothing happens. Every other time it is fine; why is this? I can see the method only runs the first time when I let go of the button (it does not trigger any onTouch part of the method though). How can I get around this and have it work for the first press?

        public void pushtotalk3(final View view) {
            ((ToggleButton) view).setChecked(true);
            ((ToggleButton) view).setChecked(false);
            view.setOnTouchListener(new OnTouchListener() {
                @Override
                public boolean onTouch(View v, MotionEvent event) {
                    // if more than one call, change this code
                    int callId = 0;
                    for (SipCallSession callInfo : callsInfo) {
                        callId = callInfo.getCallId();
                        Log.e(TAG, "" + callInfo.getCallId());
                    }
                    final int id = callId;
                    switch (event.getAction()) {
                        case MotionEvent.ACTION_DOWN: { // press
                            ((ToggleButton) view).setBackgroundResource(R.drawable.btn_blue_glossy);
                            ((ToggleButton) view).setChecked(true);
                            OnDtmf(id, 17, 10);
                            OnDtmf(id, 16, 9);
                            return true;
                        }
                        case MotionEvent.ACTION_UP: { // release
                            ((ToggleButton) view).setBackgroundResource(R.drawable.btn_lightblue_glossy);
                            ((ToggleButton) view).setChecked(false);
                            OnDtmf(id, 18, 11);
                            OnDtmf(id, 18, 11);
                            return true;
                        }
                        default:
                            return false;
                    }
                }
            });
        }

    EDIT: the XML for the button:

        <ToggleButton
            android:id="@+id/PTT_button5"
            android:layout_width="0dp"
            android:layout_height="fill_parent"
            android:text="@string/ptt5"
            android:onClick="pushtotalk5"
            android:layout_weight="50"
            android:textOn="Push To Talk On"
            android:textOff="Push To Talk Off"
            android:background="@drawable/btn_lightblue_glossy"
            android:textColor="@android:color/white"
            android:textSize="15sp" />

    EDIT: hardware problem, can't test solutions at the moment.

    Read the article

  • Redirecting from an update action to the referrer of the edit

    - by Mark Westling
    My Rails 2.3 application has a User model and the usual controller actions. The edit form can be reached two ways: when a user edits his own profile from the home page, or when an admin user edits someone else's profile from the users collection. What I'd like to do is have the update action redirect back to the referrer of the edit action, not of the update action. If I do a simple redirect_to(:back) within update, it goes back to the edit form - not good. One solution is to forget entirely about referrers and redirect based on the current_user and the updated user: if they're the same, go back to the home page, else go to the users collection page. This will break if I ever add a third path to the edit form. It's doubtful I'll ever do this, but I'd prefer a solution that's not so brittle. Another solution is to store the referrer of the edit form in a hidden field and then redirect to this value from inside the update action. This doesn't feel quite right, though I can't explain why. Are there any better approaches? Or should I stop worrying and go with one of the two I've mentioned?

    Read the article

  • How can I execute a .sql from C#?

    - by J. Pablo Fernández
    For some integration tests I want to connect to the database and run a .sql file that has the schema needed for the tests to actually run, including GO statements. How can I execute the .sql file? (Or is this totally the wrong way to go?) I've found a post in the MSDN forum showing this code:

        using System.Data.SqlClient;
        using System.IO;
        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Management.Smo;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    string sqlConnectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                    FileInfo file = new FileInfo("C:\\myscript.sql");
                    string script = file.OpenText().ReadToEnd();
                    SqlConnection conn = new SqlConnection(sqlConnectionString);
                    Server server = new Server(new ServerConnection(conn));
                    server.ConnectionContext.ExecuteNonQuery(script);
                }
            }
        }

    but on the last line I'm getting this error:

        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
        ---> System.TypeInitializationException: The type initializer for '<Module>' threw an exception.
        ---> <CrtImplementationDetails>.ModuleLoadException: The C++ module failed to load during appdomain initialization.
        ---> System.DllNotFoundException: Unable to load DLL 'MSVCR80.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E).

    I was told to go and download that DLL from somewhere, but that sounds very hacky. Is there a cleaner way? Is there another way to do it? What am I doing wrong? I'm doing this with Visual Studio 2008, SQL Server 2008, .NET 3.5 SP1 and C# 3.0.
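    A hedged alternative sketch that avoids the SMO assemblies (and their native dependency) entirely: since GO is a client-side batch separator rather than T-SQL, the script can be split on lines containing only GO and each batch executed with a plain SqlCommand. The file path and connection string are the same placeholders as above.

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Text.RegularExpressions;

        class RunSqlScript
        {
            static void Main()
            {
                string connectionString = "Data Source=(local);Initial Catalog=AdventureWorks;Integrated Security=True";
                string script = File.ReadAllText(@"C:\myscript.sql");

                // split on lines that contain only "GO" (the batch separator is not real T-SQL)
                string[] batches = Regex.Split(script, @"^\s*GO\s*$",
                                               RegexOptions.Multiline | RegexOptions.IgnoreCase);

                using (SqlConnection conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    foreach (string batch in batches)
                    {
                        if (batch.Trim().Length == 0) continue;   // skip empty fragments
                        using (SqlCommand cmd = new SqlCommand(batch, conn))
                        {
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }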

    Read the article

  • Sybase stored procedure - how do I create an index on a #table?

    - by DVK
    I have a stored procedure which creates and works with a temporary #table. Some of the queries would be tremendously optimized if that temporary #table had an index created on it. However, creating an index within the stored procedure fails:

        create procedure test1 as

        SELECT f1, f2, f3
        INTO #table1
        FROM main_table
        WHERE 1 = 2

        -- insert rows into #table1

        create index my_idx on #table1 (f1)

        SELECT f1, f2, f3
        FROM #table1 (index my_idx)
        WHERE f1 = 11   -- "QUERY X"

    When I call the above, the query plan for "QUERY X" shows a table scan. If I simply run the code above outside the stored procedure, the messages show the following warning:

        Index 'my_idx' specified as optimizer hint in the FROM clause of table '#table1' does not exist. Optimizer will choose another index instead.

    This can be resolved when running ad hoc (outside the stored procedure) by splitting the code above into two batches by adding "go" after the index creation:

        create index my_idx on #table1 (f1)
        go

    Now "QUERY X"'s query plan shows the use of index "my_idx". QUESTION: How do I mimic running the "create index" in a separate batch when it's inside the stored procedure? I can't insert a "go" there like I do with the ad-hoc copy above. P.S. If it matters, this is on Sybase 12.

    Read the article

  • Building a PayPal-based membership website - total noob - would appreciate help

    - by Ali
    This is a follow-up to my question on PayPal integration. I'm working on a membership site for racing fans. My membership site has 3 membership levels - free, gold and premium. When a user signs up he or she gets a free membership on the spot, but has the option to upgrade to a gold membership for 4 dollars a month or a premium membership for 10 dollars a month. I've gone through the PayPal integration guide a few times and have a vague understanding of how to get this to work. I think the recurring payments option would be fine, however I don't know how to implement this in my system. When a user decides to go for a paid account, i.e. gold or premium from basic, what should I do on both the code side and on the PayPal account side? I'd really appreciate it if anyone would outline what I'd have to do here. Plus, when a user decides to upgrade from, let's say, a gold to a premium account, there is the issue of computing how much should be charged for the upgrade. For example: a user has been billed 4 dollars and the next day opts to go for a premium account, so assuming that the surplus for the rest of the month is 5 dollars, and from then on all payments would be a recurring 10 dollars monthly - how do I implement this? And in case a user decides to downgrade from a premium account of 10 dollars a month to a gold account of 4 dollars a month, how do I handle the surplus which would have to be refunded for that month alone, and changing the membership? And likewise, if someone wishes to cancel membership and go to a free account, how do I refund whatever is owed and cancel the subscription? I'm sorry if it sounds like I'm asking to be spoon-fed :( I'm quite new to this and this is for a client, so I would really appreciate all the help here and really have to get this working right. Thanks again everyone - waiting for all your replies.

    Read the article

  • Blackberry Keyboard Lock timeout

    - by Vernon
    I want this blackberry 9700 to "fully lock" as soon as I click the icon for the "Keyboard Lock" application. Currently I have to wait 5 to 7 seconds for the screen to go dark after each time I click the "Keyboard Lock" icon. During that time if something touches the touch pad, then the 5-7 second timer resets and you have to wait another 5 to 7 seconds for the screen to go dark and "fully lock" After it finally goes dark, touching the touch pad does not reset the timer. At that point it is "fully locked" and requires a key to be pressed. How can I get it to "fully lock" as soon as the lock icon is clicked? I want the screen to go dark immediately, and for it to require a key press to request an unlock. I have tried Options - Screen/Keyboard - Backlight Timeout ... etc ... none of that reduces the timeout for the "Keyboard Lock" application. And there does not seem to be an option screen for the "Keyboard Lock" application, that I can find. NOTE: This is occurring with BlackBerry 9700 v5.0.0.330 (Platform 5.1.0.91)

    Read the article

  • Equivalent to window.setTimeout() for C++

    - by bobobobo
    In JavaScript there's this sweet, sweet function window.setTimeout( func, 1000 ), which will asynchronously invoke func after 1000 ms. I want to do something similar in C++ (without multithreading), so I put together a sample loop like:

        #include <stdio.h>

        struct Callback
        {
            // The _time_ this function will be executed.
            double execTime;

            // The function to execute after execTime has passed
            void* func;
        };

        // Sample function to execute
        void go()
        {
            puts( "GO" );
        }

        // Global program-wide sense of time
        double time;

        int main()
        {
            // start the timer
            time = 0;

            // Make a sample callback
            Callback c1;
            c1.execTime = 10000;
            c1.func = go;

            while( 1 )
            {
                // its time to execute it
                if( time > c1.execTime )
                {
                    c1.func; // !! doesn't work!
                }
                time++;
            }
        }

    How can I make something like this work?
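    A minimal sketch of one way to make the loop above actually fire the callback: store a real function pointer instead of a void*, and compare against a clock. (Whether a busy-wait loop is acceptable is a separate question; this only shows the callback mechanics.)

        #include <cstdio>
        #include <ctime>

        struct Callback
        {
            std::clock_t execTime;   // when to run, in clock ticks
            void (*func)();          // a callable function pointer, not void*
        };

        void go() { std::puts( "GO" ); }

        int main()
        {
            Callback c1;
            c1.execTime = std::clock() + 2 * CLOCKS_PER_SEC;   // roughly 2 seconds from now
            c1.func = go;

            bool fired = false;
            while( !fired )
            {
                if( std::clock() >= c1.execTime )
                {
                    c1.func();       // the parentheses are what actually invoke it
                    fired = true;
                }
            }
            return 0;
        }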

    Read the article

  • Building Stored Procedure to group data into ranges with roughly equal results in each bucket

    - by Len
    I am trying to build one procedure that takes a large amount of data and creates 5 range buckets to display the data. The bucket ranges will have to be set according to the results. Here is my existing SP:

        GO
        /****** Object: StoredProcedure [dbo].[sp_GetRangeCounts] Script Date: 03/28/2010 19:50:45 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER PROCEDURE [dbo].[sp_GetRangeCounts]
            @idMenu int
        AS
        declare @myMin decimal(19,2), @myMax decimal(19,2), @myDif decimal(19,2),
                @range1 decimal(19,2), @range2 decimal(19,2), @range3 decimal(19,2),
                @range4 decimal(19,2), @range5 decimal(19,2), @range6 decimal(19,2)

        SELECT @myMin=Min(modelpropvalue), @myMax=Max(modelpropvalue)
        FROM xmodelpropertyvalues
        where modelPropUnitDescriptionID=@idMenu

        set @myDif=(@myMax-@myMin)/5
        set @range1=@myMin
        set @range2=@myMin+@myDif
        set @range3=@range2+@myDif
        set @range4=@range3+@myDif
        set @range5=@range4+@myDif
        set @range6=@range5+@myDif

        select @myMin,@myMax,@myDif,@range1,@range2,@range3,@range4,@range5,@range6

        select t.range as myRange, count(*) as myCount
        from (
            select case
                when modelpropvalue between @range1 and @range2 then 'range1'
                when modelpropvalue between @range2 and @range3 then 'range2'
                when modelpropvalue between @range3 and @range4 then 'range3'
                when modelpropvalue between @range4 and @range5 then 'range4'
                when modelpropvalue between @range5 and @range6 then 'range5'
            end as range
            from xmodelpropertyvalues
            where modelpropunitDescriptionID=@idmenu) t
        group by t.range
        order by t.range

    This calculates the min and max value from my table, works out the difference between the two and creates 5 buckets. The problem is that if there is a small number of very high (or very low) values then the buckets will appear very distorted, as in these results:

        range1 2806
        range2 296
        range3 75
        range5 1

    Basically I want to rebuild the SP so it creates buckets with equal amounts of results in each. I have played around with some of the following approaches without quite nailing it...

        SELECT modelpropvalue, NTILE(5) OVER (ORDER BY modelpropvalue) FROM xmodelpropertyvalues
        -- this creates a new column with either 1, 2, 3, 4 or 5 in it

        ROW_NUMBER() OVER (ORDER BY modelpropvalue) between @range1 and @range2
        ROW_NUMBER() OVER (ORDER BY modelpropvalue) between @range2 and @range3

    Or maybe I could allocate every record a row number and then divide it into ranges from there?
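    A hedged sketch building on the NTILE(5) idea already mentioned (same table and parameter names as the existing SP): let NTILE assign each row to one of five equally populated buckets, then report each bucket's boundaries and count.

        SELECT bucket,
               MIN(modelpropvalue) AS bucketMin,
               MAX(modelpropvalue) AS bucketMax,
               COUNT(*)            AS myCount
        FROM (
            SELECT modelpropvalue,
                   NTILE(5) OVER (ORDER BY modelpropvalue) AS bucket
            FROM xmodelpropertyvalues
            WHERE modelPropUnitDescriptionID = @idMenu
        ) t
        GROUP BY bucket
        ORDER BY bucket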

    Read the article

  • Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options

    - by Dezigo
    I wrote a trigger:

        USE [TEST]
        GO
        /****** Object: Trigger [dbo].[TR_POSTGRESQL_UPDATE_YC] Script Date: 05/26/2010 08:54:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC]
            ON [dbo].[BCT_CNTR_EVENTS]
            FOR INSERT
        AS
        BEGIN
            DECLARE @MOVE_TIME varchar(14);
            DECLARE @MOVE_TIME_FORMATED varchar(20);
            DECLARE @RELEASE_NOTE varchar(32);
            DECLARE @CMR_NUMBER varchar(15);
            DECLARE @MOVE_TYPE varchar(2);

            SELECT @MOVE_TIME = inserted.move_time
                  ,@MOVE_TYPE = inserted.move_type
                  ,@RELEASE_NOTE = inserted.release_note
                  ,@CMR_NUMBER = inserted.cmr_number
            FROM inserted

            IF(@MOVE_TYPE = 'YC')
            BEGIN
                SET @MOVE_TIME_FORMATED = SUBSTRING(@MOVE_TIME,1,4) + '-' + SUBSTRING(@MOVE_TIME,5,2) + '-' + SUBSTRING(@MOVE_TIME,7,2) + ' 00:00:00'

                --UPDATE OpenQuery(POSTGRESQL_SERV,'SELECT visit_cmr,visit_timestamp,visit_pin FROM VISIT')
                --  SET visit_cmr = @RELEASE_NOTE
                --  WHERE visit_timestamp = @MOVE_TIME_FORMATED
                --  AND visit_pin = right(@CMR_NUMBER,5)
                --  AND visit_cmr IS NULL
            END

            SET NOCOUNT ON;
        END

    When I insert a row, I get this error:

        Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query.

    Then of course I set ANSI_WARNINGS ON, but it does not work for me (the trigger targets a linked PostgreSQL server). I have restarted the server; it still does not work.
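    A hedged sketch of the usual fix (the inserted values below are placeholders): both options have to be ON in the connection that performs the INSERT that fires the trigger, not only in the batch that created the trigger, so set them in the inserting session (or in the client's connection defaults) before the DML runs.

        SET ANSI_NULLS ON;
        SET ANSI_WARNINGS ON;

        -- illustrative insert; the real column list and values come from the application
        INSERT INTO dbo.BCT_CNTR_EVENTS (move_time, move_type, release_note, cmr_number)
        VALUES ('20100526000000', 'YC', 'NOTE-001', 'CMR-12345');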

    Read the article
