Search Results

Search found 1696 results on 68 pages for 'ten ton gorilla'.

Page 57/68 | < Previous Page | 53 54 55 56 57 58 59 60 61 62 63 64  | Next Page >

  • perl dynamic path given to 'use lib'

    - by Ed Hyer
    So, my code (Perl scripts and Perl modules) sits in a tree like this: trunk/ util/ process/ scripts/ The 'util' directory has, well, utilities that things in the 'process/' dir need. They get access like this: use FindBin; use lib "$FindBin::Bin/../util"; use UtilityModule qw(all); That construct doesn't care where you start, as long as you're at the same level in the tree as "util/". But I decided that 'scripts/' was getting too crowded, so I created scripts/scripts1 scripts/scripts2 Now I see that this doesn't work. If I run a script 'trunk/scripts/scripts1/call_script.pl', and it calls '/trunk/process/process_script.pl', then 'process_script.pl' will fail trying to get the routines from UtilityModule(), because the path that FindBin returns is the path of the top-level calling script. The first ten ways I thought of to solve this all involved something like: use lib $path_that_came_from_elsewhere; but that seems to be something Perl doesn't like to do, except via that FindBin trick. I tried some things involving BEGIN{} blocks, but I don't really know what I'm doing there, and will likely just end up refactoring. But if someone has some clever insight into this type of problem, this would be a good chance to earn some points!

    Read the article

  • Lightweight development web server with support for PHP v2

    - by David
    In line with this question: http://stackoverflow.com/questions/171655/lightweight-web-app-server-for-php The above question has been asked numerous times and answered exactly the same in all the cases I've found using Google. My question is similar to a degree but with a different desired goal: on-demand development instances. I have come up with a somewhat questionable solution to host arbitrary directories in my user account for the purpose of development testing. I am not interested in custom vhosts but looking to emulate the behaviour I get when using paster or mongrel for Python & Ruby respectively. Ubuntu 9.10 TOXIC@~/ APACHE_RUN_USER=$USER APACHE_RUN_GROUP=www-data apache2 -d ~/Desktop/ -c "Listen 2990" Is there a better solution? Could I do something similar with nginx or lighttpd? Note: The above won't work correctly for stock environments without a copied & altered httpd.conf. Update: The ideal goal is to mimic Paster, WEBrick, and Mongrel for rapid local development hosting. For those lightweight servers, it takes less than a minute to get a working instance running (not factoring in any DB support). An Apache2 vhost is great, but I've been using Apache2 for over ten years and it would be some sort of abomination hack to set up a new entry in /etc/hosts unless you have your own DNS, in which case a wildcard subdomain setup would probably work great. EXCEPT for one more problem: it's pretty easy for me to know what is being hosted (e.g. by paster or mongrel) just by doing a sudo netstat -tulpn, while there would be a good possibility of confusion in figuring out which vhost is which.

    Read the article

  • Need help with an AJAX workflow

    - by Anders
    Sorry I couldn't be more descriptive with the title, I will elaborate fully below: I have a web application that I want to implement some AJAX functionality into. Currently, it is running ASP.NET 3.5 with VB.NET codebehind. My current "problem" is I want to dynamically be able to populate a DIV when a user clicks an item on a list. The list item currently contains a HttpUtility.UrlEncode() (ASP.NET) string of the content that should appear in the DIV. Example: <li onclick="setFAQ('The+maximum+number+of+digits+a+patient+account+number+can+contain+is+ten+(10).');"> What is the maximum number of digits a patient account number can contain?</li> I can decode the string partially with the JavaScript function unescape() but it does not fully decode the string. I would much rather pass the JavaScript function the faq ID then somehow pull the information from the database where it originates. I am 99% sure it is impossible to call an ASP function from within a JavaScript function, so I am kind of stumped. I am kind of new to AJAX/ASP.NET so this is a learning experience for me.

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine amounting to about 100 bytes of data. Then I read it back as fast as I could by row key. I can read it back at 160,000/second. Great. Then I put in a million similar records all with keys in the form of X.Y where X in (1..10) and Y in (1..100,000) and I queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec) Finally I put ten million records in from 1.1 up through 10.1000000 and randomly queried for one of the 10 million records. Performance is abysmal at 60 queries per second and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first and then as they cache, it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10mil records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6gigs of ram so I don't think it's the machine. Here's my code to fetch records which I'm spawning into 8 threads to ask for one value from one column via row key: ColumnPath cp = new ColumnPath(); cp.Column_family = "Standard1"; cp.Column = utf8Encoding.GetBytes("site"); string key = (1+sRand.Next(9)) + "." + (1+sRand.Next(1000000)); ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE); Thanks for any insights

    Read the article

  • Triangle numbers problem: show the result within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers: 1: 1 3: 1,3 6: 1,2,3,6 10: 1,2,5,10 15: 1,3,5,15 21: 1,3,7,21 28: 1,2,4,7,14,28 We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors. Sample Input: 5 Output 28 Input Constraints: 1<=n<=320 I was obviously able to do this question, but I used a naive algorithm: Get n. Find triangle numbers and check their number of factors using the mod operator. But the challenge was to show the output within 4 seconds of input. On high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their number of factors in a 2d array first and then get the input from the user and search the array. But somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code. Or if there are any better ways, please tell me.
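
    A minimal sketch in Python of a faster approach than trial-dividing each triangle number directly: the k-th triangle number is k(k+1)/2, and because k and k+1 share no common factor, its divisor count is the product of two much smaller divisor counts. This is an editorial sketch under those assumptions, not the poster's code.

        # Sketch: count divisors of the k-th triangle number T = k*(k+1)/2 without
        # factoring T directly. k and k+1 are coprime, so the divisor count is the
        # product of the divisor counts of k/2 and k+1 (or k and (k+1)/2).
        def divisor_count(m):
            """Count the divisors of m by trial division up to sqrt(m)."""
            count, i = 0, 1
            while i * i <= m:
                if m % i == 0:
                    count += 1 if i * i == m else 2
                i += 1
            return count

        def first_triangle_with_divisors(n):
            """Return the first triangle number with at least n divisors."""
            k = 1
            while True:
                if k % 2 == 0:
                    d = divisor_count(k // 2) * divisor_count(k + 1)
                else:
                    d = divisor_count(k) * divisor_count((k + 1) // 2)
                if d >= n:
                    return k * (k + 1) // 2
                k += 1

        print(first_triangle_with_divisors(5))    # prints 28, matching the sample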

    Read the article

  • Is there an algorithm to securely split a message into x parts requiring at least y parts to reassemble?

    - by Aaron
    Is there an algorithm to securely split a message into x parts requiring at least y parts to reassemble? Obviously, y <= x. An example: Say that I have a secret message that I only want to be read in the event of my death. As a way to ensure this, I give a fraction of the message to ten friends. Now, I can't guarantee that all my friends will be able to put their messages together to recover the original. So, I construct each message fraction in such a way that any 5 friends can put their parts together to reconstruct the whole. However, owning fewer than 5 parts will not give anything away about the message, except possibly the length. My question is, is this possible? What algorithms might I look at to accomplish this? Clarification edit: The important part of this is the cryptographic strength. An attacker should not be able to recover the message, either in whole or in part, with fewer than y parts.
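
    What is being described is a threshold scheme, and Shamir's Secret Sharing is the standard construction: encode the secret as the constant term of a random degree y-1 polynomial over a prime field, hand out x points, and interpolate the constant term back from any y of them. A minimal sketch in Python follows; it assumes the secret is an integer smaller than the chosen prime and uses a non-cryptographic RNG, so treat it as an illustration rather than production code.

        # Minimal sketch of Shamir's Secret Sharing: split a secret integer into x
        # shares so that any y of them reconstruct it. Uses random.randrange, which
        # is NOT a cryptographically secure RNG; a real implementation must use one.
        import random

        PRIME = 2**127 - 1   # a prime larger than any secret we intend to split

        def make_shares(secret, y, x, prime=PRIME):
            """Return x points (i, f(i)) of a random degree y-1 polynomial with f(0) = secret."""
            coeffs = [secret] + [random.randrange(prime) for _ in range(y - 1)]
            def f(i):
                acc = 0
                for c in reversed(coeffs):   # Horner evaluation mod prime
                    acc = (acc * i + c) % prime
                return acc
            return [(i, f(i)) for i in range(1, x + 1)]

        def recover_secret(shares, prime=PRIME):
            """Lagrange-interpolate f(0) from any y of the shares."""
            secret = 0
            for j, (xj, yj) in enumerate(shares):
                num, den = 1, 1
                for m, (xm, _) in enumerate(shares):
                    if m != j:
                        num = (num * -xm) % prime
                        den = (den * (xj - xm)) % prime
                secret = (secret + yj * num * pow(den, -1, prime)) % prime
            return secret

        shares = make_shares(secret=123456789, y=5, x=10)
        print(recover_secret(shares[2:7]))   # any 5 of the 10 shares give back 123456789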

    Read the article

  • My images aren't updating immediately upon changing their src in javascript

    - by Dale
    I'm using the function below to change the img src. It's an array of ten images. When going through the loop, using breakpoints, the images don't all update on the page immediately. Some of them do. If I inspect the unchanged images on the page (while paused at a breakpoint), the src has changed, but the image hasn't changed yet. All of the unchanged images get updated correctly when the function ends. Anyone know why they don't all get updated instantly and how I can force them to update? Also, is there a way I can hold off the updates of all of them until they're all reassigned and thus have them all update on the page "simultaneously"? Here's my function. function mainFunction(){ finalSet = calculateSet(); for ( var int = 0; int < finalSet.length; int++) { var fileName = "cardImg" + (int); document.getElementById(fileName).src = "images/cards/" + finalSet[int].name + ".jpg"; } } Thanks for the help. Dale

    Read the article

  • Most efficient way to check for DBNull and then assign to a variable?

    - by ilitirit
    This question comes up occasionally but I haven't seen a satisfactory answer. A typical pattern is (row is a DataRow): if (row["value"] != DBNull.Value) { someObject.Member = row["value"]; } My first question is which is more efficient (I've flipped the condition): row["value"] == DBNull.Value; // Or row["value"] is DBNull; // Or row["value"].GetType() == typeof(DBNull) // Or... any suggestions? This indicates that .GetType() should be faster, but maybe the compiler knows a few tricks I don't? Second question, is it worth caching the value of row["value"] or does the compiler optimize the indexer away anyway? e.g. object valueHolder; if (DBNull.Value == (valueHolder = row["value"])) {} Disclaimers: row["value"] exists. I don't know the column index of the column (hence the column name lookup) I'm asking specifically about checking for DBNull and then assignment (not about premature optimization etc). Edit: I benchmarked a few scenarios (time in seconds, 10000000 trials): row["value"] == DBNull.Value: 00:00:01.5478995 row["value"] is DBNull: 00:00:01.6306578 row["value"].GetType() == typeof(DBNull): 00:00:02.0138757 Object.ReferenceEquals has the same performance as "==" The most interesting result? If you mismatch the name of the column by case (e.g. "Value" instead of "value"), it takes roughly ten times longer (for a string): row["Value"] == DBNull.Value: 00:00:12.2792374 The moral of the story seems to be that if you can't look up a column by its index, then ensure that the column name you feed to the indexer matches the DataColumn's name exactly. Caching the value also appears to be nearly twice as fast: No Caching: 00:00:03.0996622 With Caching: 00:00:01.5659920 So the most efficient method seems to be: object temp; string variable; if (DBNull.Value != (temp = row["value"])) { variable = temp.ToString(); } This was a good learning experience.

    Read the article

  • Rate limiting a ruby file stream

    - by Matthew Savage
    I am working on a project which involves uploading flash video files to an S3 bucket from a number of geographically distributed nodes. The video files are about 2-3mb each, and we are only sending one file (per node) every ten minutes, however the bandwidth we consume needs to be rate limited to ~20k/s, as these nodes are delivering streaming media to a CDN, and due to the locations we are only able to get 512k max upload. I have been looking into the AWS-S3 gem and while it doesn't offer any kind of rate limiting I am aware that you can pass in an IO stream. Given this I am wondering if it might be possible to create a rate-limited stream which overrides the read method, adds in the rate limiting logic (e.g. in its simplest form a call to sleep between reads) and then calls out to the super of the overridden method. Another option I considered is hacking the code for Net::HTTP and putting the rate limiting into the send_request_with_body_stream method which is using a while loop, but I'm not entirely sure which would be the best option. I have attempted to extend the IO class, but that didn't work at all; simply inheriting from the class with class ThrottledIO < IO didn't do anything. Any suggestions will be greatly appreciated.
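
    The "sleep between reads" idea described in the question is straightforward to prototype; here it is sketched in Python rather than Ruby, wrapping a file object so that read() never runs ahead of a target byte rate. The class name, chunk size and file name are illustrative assumptions, not from any particular library.

        # Sketch of a throttled reader: wrap a file object and sleep inside read() so
        # the average throughput stays near a target bytes-per-second rate.
        import time

        class ThrottledReader:
            def __init__(self, fileobj, bytes_per_second=20 * 1024):
                self._f = fileobj
                self._rate = bytes_per_second
                self._start = time.monotonic()
                self._sent = 0

            def read(self, size=8192):
                chunk = self._f.read(size)
                self._sent += len(chunk)
                # If we are ahead of the allowed rate, sleep until we are back on schedule.
                expected = self._sent / self._rate
                elapsed = time.monotonic() - self._start
                if expected > elapsed:
                    time.sleep(expected - elapsed)
                return chunk

        with open("video.flv", "rb") as f:     # "video.flv" is a made-up example path
            throttled = ThrottledReader(f, bytes_per_second=20 * 1024)
            while throttled.read(8192):
                pass   # in real use, hand each chunk to the uploader here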

    Read the article

  • How to scan an array for certain information

    - by Andrew Martin
    I've been doing an MSc Software Development conversion course, the main language of which is Java, since the end of September. We have our first assessed practical coming up and I was hoping for some guidance. We have to create an array that will store 100 integers (all of which are between 1 and 10), which are generated by a random number generator, and then print out ten numbers of this array per line. For the second part, we need to scan these integers, count up how often each number appears and store the results in a second array. I've done the first bit okay, but I'm confused about how to do the second. I have been looking through the Scanner class to see if it has any methods which I could use, but I don't see any. Could anyone point me in the right direction - not the answer, but perhaps which library it comes from? Code so far: import java.util.Random; public class Practical4_Assessed { public static void main(String[] args) { Random numberGenerator = new Random (); int[] arrayOfGenerator = new int[100]; for (int countOfGenerator = 0; countOfGenerator < 100; countOfGenerator++) arrayOfGenerator[countOfGenerator] = numberGenerator.nextInt(10); int countOfNumbersOnLine = 0; for (int countOfOutput = 0; countOfOutput < 100; countOfOutput++) { if (countOfNumbersOnLine == 10) { System.out.println(""); countOfNumbersOnLine = 0; countOfOutput--; } else { System.out.print(arrayOfGenerator[countOfOutput] + " "); countOfNumbersOnLine++; } } } } Thanks, Andrew
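
    For the counting step no scanner is needed: a second array indexed by the value itself does the job. A language-agnostic sketch of that idea in Python (not the assignment's Java); the same shape translates directly to a Java int[].

        # Sketch of the counting step: a second array indexed by the value itself
        # records how often each number appears.
        import random

        values = [random.randint(1, 10) for _ in range(100)]   # 100 integers, 1..10

        counts = [0] * 11          # index 1..10 holds the frequency of that number
        for v in values:
            counts[v] += 1

        for number in range(1, 11):
            print(number, "appeared", counts[number], "times")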

    Read the article

  • Looping through recordset with VBA

    - by Robert
    I am trying to assign salespeople (rsSalespeople) to customers (rsCustomers) in a round-robin fashion in the following manner: Navigate to first Customer, assign the first SalesPerson to the Customer. Move to Next Customer. If rsSalesPersons is not at EOF, move to Next SalesPerson; if rsSalesPersons is at EOF, MoveFirst to loop back to the first SalesPerson. Assign this (current) SalesPerson to the (current) Customer. Repeat step 2 until rsCustomers is at EOF (EOF = True, i.e. End-Of-Recordset). It's been a while since I dealt with VBA, so I'm a bit rusty, but here is what I have come up with, so far: Private Sub Command31_Click() 'On Error GoTo ErrHandler Dim intCustomer As Integer Dim intSalesperson As Integer Dim rsCustomers As DAO.Recordset Dim rsSalespeople As DAO.Recordset Dim strSQL As String strSQL = "SELECT CustomerID, SalespersonID FROM Customers WHERE SalespersonID Is Null" Set rsCustomers = CurrentDb.OpenRecordset(strSQL) strSQL = "SELECT SalespersonID FROM Salespeople" Set rsSalespeople = CurrentDb.OpenRecordset(strSQL) rsCustomers.MoveFirst rsSalespeople.MoveFirst Do While Not rsCustomers.EOF intCustomers = rsCustomers!CustomerID intSalesperson = rsSalespeople!SalespersonID strSQL = "UPDATE Customers SET SalespersonID = " & intSalesperson & " WHERE CustomerID = " & intCustomer DoCmd.RunSQL (strSQL) rsCustomers.MoveNext If Not rsSalespeople.EOF Then rsSalespeople.MoveNext Else rsSalespeople.MoveFirst End If Loop ExitHandler: Set rsCustomers = Nothing Set rsSalespeople = Nothing Exit Sub ErrHandler: MsgBox (Err.Description) Resume ExitHandler End Sub My tables are defined like so: Customers --CustomerID --Name --SalespersonID Salespeople --SalespersonID --Name With ten customers and 5 salespeople, my intended result would look like: CustomerID--Name--SalespersonID 1---A---1 2---B---2 3---C---3 4---D---4 5---E---5 6---F---1 7---G---2 8---H---3 9---I---4 10---J---5 The above code works for the initial loop through the Salespeople recordset, but errors out when the end of the recordset is found. Regardless of the EOF, it appears it still tries to execute the rsSalespeople.MoveFirst command. Am I not checking for the rsSalespeople.EOF properly? Any ideas to get this code to work?
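
    Setting the DAO specifics aside, the pairing the code is after is a plain round-robin. Here is that logic sketched in Python, with itertools.cycle doing the "wrap back to the first salesperson" bookkeeping; the IDs are made up for illustration.

        # The round-robin pairing the VBA above is trying to produce.
        from itertools import cycle

        customer_ids = list(range(1, 11))    # ten customers
        salesperson_ids = [1, 2, 3, 4, 5]    # five salespeople

        salespeople = cycle(salesperson_ids)       # restarts automatically at the end
        assignments = {c: next(salespeople) for c in customer_ids}

        for customer, salesperson in assignments.items():
            print(customer, "->", salesperson)     # 1->1, 2->2, ... 5->5, 6->1, 7->2, ...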

    Read the article

  • Error when calling SQL SP via LINQ

    - by PaulC
    Newbie problem: I have a SQL SP with ten parameters (eight input, two output) but when I attempt to call it via LINQ from code I get the following error message: "The best overloaded method match for 'DataClassesDataContext.ST_CR_CREATE_CASE_BASIS(string, string, string, string, System.DateTime?, string, string, string, ref int?, ref int?)' has some invalid arguments". The params with ? appear to be unrecognized, but I'm baffled: the data types match the SQL types, the number of parameters match, the other parmeters don't exhibit the same behaviour. Can anyone tell me what's going on? Thanks in advance. -- SQL SP: create procedure ST_CR_CREATE_CASE_BASIS @p_Pers_No nvarchar (50), @p_Subject nvarchar (255), @p_RQ_XML nvarchar(max), @p_RQ_XSL nvarchar(max), @p_Date_Submit smalldatetime, @p_User_ID_Submit nvarchar (255), @p_RQ_Status nvarchar (50), @p_User_ID_OnBehalf nvarchar (255), @p_Case_Number int output, @p_RQ_ID int output as begin -- ... etc.; the SP works fine when called from SSMS The code-behind proc from the aspx page looks like this: protected void cmdSubmit_Click(object sender, EventArgs e) { using (DataClassesDataContext vDataCont = new DataClassesDataContext()) { Int32 vNewCaseNr; Int32 vNewReqNr; DateTime vNow = System.DateTime.Now; vDataCont.ST_CR_CREATE_CASE_BASIS("101", "Test Subject Late Wed", null, null, vNow , "101", "1", "101", ref vNewCaseNr, vNewReqNr); } }

    Read the article

  • Goal setting/tracking packages

    - by Avi
    I'm a developer working by myself. I'm looking for a computerized tool to manage my goals and activities. I own Microsoft Project, but I don't like it. I've started many "projects" but could never keep on using it. Too complex and heavyweight for me. I use MS-Outlook tasks. They are not what I need. No planning capability. Tracking is not nice. I'm using the Pomodoro technique and I like it, but I'm looking for something more comprehensive and with better computerized support. Something that would allow me to define goals with dependencies and time estimation, keep daily prioritized lists, etc. So, I'm looking for a solution. One I've found is GoalPro, but I don't like the fact that I could not find a "top ten comparison". Are you using any goal-setting package such as GoalPro? Which? Does it help? Pros and cons?

    Read the article

  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each image by clicking on buttons. A button for "funny", a button for "scary", a button for "pretty" and so on. These buttons aren't exclusive. A picture can be both funny and scary. The user clicks the "funny" button. An AJAX request is sent off to the database to mark that image as funny. The "funny" button lights up, by assigning a class in the DOM to mark it as "on". But the user made a mistake. They meant to hit the next button over. They should click "funny" again to turn it off, right? At this point I'm not sure what's the most efficient way to proceed. The database knows that the "funny" flag is set, but it's inefficient to query the database every time a button is clicked to say, is this flag set or not, then go on with a second database call to toggle it. Should I infer the state of the database flag from the DOM, i.e. if that button has the class "on" then the flag must be set, and branch at that point? Or would it be better to have a data structure in JavaScript in the page which duplicates the state of each image in the database, so that every time I set the database flag to true, I also set the value in the JavaScript data to true and so on?

    Read the article

  • Java library class to handle scheduled execution of "callbacks"?

    - by Hanno Fietz
    My program has a component - dubbed the Scheduler - that lets other components register points in time at which they want to be called back. This should work much like the Unix cron service, i.e. you tell the Scheduler "notify me at ten minutes past every full hour". I realize there are no real callbacks in Java. Here's my approach, is there a library which already does this stuff? Feel free to suggest improvements, too. Register call to Scheduler passes: a time specification containing hour, minute, second, year, month, dom, dow, where each item may be unspecified, meaning "execute it every hour / minute etc." (just like crontabs) an object containing data that will tell the calling object what to do when it is notified by the Scheduler. The Scheduler does not process this data, just stores it and passes it back upon notification. a reference to the calling object Upon startup, or after a new registration request, the Scheduler starts with a Calendar object of the current system time and checks if there are any entries in the database that match this point in time. If there are, they are executed and the process starts over. If there aren't, the time in the Calendar object is incremented by one second and the entries are rechecked. This repeats until there is one entry or more that match(es). (Discrete Event Simulation) The Scheduler will then remember that timestamp, sleep and wake every second to check if it is already there. If it happens to wake up and the time has already passed, it starts over, likewise if the time has come and the jobs have been executed. Edit: Thanks for pointing me to Quartz. I'm looking for something much smaller, however.
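
    A sketch in Python of the search loop described above: advance one second at a time until every specified field matches. The spec format and field names are assumptions for illustration, and a real scheduler would jump field by field rather than stepping seconds.

        # None plays the role of an unspecified field ("every value").
        from datetime import datetime, timedelta

        def next_fire_time(spec, start=None):
            """spec is a dict like {'minute': 10, 'second': 0}; missing fields match anything."""
            t = (start or datetime.now()).replace(microsecond=0)
            fields = {
                'second': lambda d: d.second, 'minute': lambda d: d.minute,
                'hour':   lambda d: d.hour,   'dom':    lambda d: d.day,
                'month':  lambda d: d.month,  'year':   lambda d: d.year,
                'dow':    lambda d: d.weekday(),
            }
            while True:
                if all(spec.get(name) is None or spec[name] == get(t)
                       for name, get in fields.items()):
                    return t
                t += timedelta(seconds=1)

        # "notify me at ten minutes past every full hour"
        print(next_fire_time({'minute': 10, 'second': 0}))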

    Read the article

  • bash: listing files in date order, with spaces in filenames

    - by Jason Judge
    I am starting with a file containing a list of hundreds of files (full paths) in a random order. I would like to list the details of the ten latest files in that list. This is my naive attempt: ls -las -t `cat list-of-files.txt` | head -10 That works, so long as none of the files have spaces in, but fails if they do as those files are split up at the spaces and treated as separate files. I have tried quoting the files in the original list-of-files file, but the command substitution still splits the files up at the spaces in the filenames. The only way I can think of doing this, is to ls each file individually (using xargs perhaps) and create an intermediate file with the file listings and the date in a sortable order as the first field in each line, then sort that intermediate file. However, that feels a bit cumbersome and inefficient (hundreds of ls commands rather than one or two). But that may be the only way to do it? Is there any way to pass "ls" a list of files to process, where those files could contain spaces - it seems like it should be simple, but I'm stumped.
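
    If the shell quoting keeps getting in the way, the same result can be had without re-splitting filenames at all; a hedged alternative sketch in Python that reads the list, sorts by modification time and prints the ten newest (it assumes every listed path still exists):

        # Each whole line is treated as a single path, so spaces are never a problem.
        import os

        with open("list-of-files.txt") as f:
            paths = [line.rstrip("\n") for line in f if line.strip()]

        paths.sort(key=os.path.getmtime, reverse=True)
        for path in paths[:10]:
            st = os.stat(path)
            print(st.st_size, st.st_mtime, path)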

    Read the article

  • CSS Background image in Redmine template arbitrarily not loading

    - by Pekka
    I'm in the process of building a template for Redmine (a project management system based on Ruby on Rails). Ruby is running on a virtual server from a Bitnami.org installation package. The OS is Windows. The template essentially consists of a styles.css file. In that file, I have the following rule: #header { padding: 0px; padding-top: 48px; background-color: #62DFFF; background-image: url(../images/bkg.jpg) background-position: center bottom; background-repeat: repeat-x; height:150px; } It's a header element with a background image. The problem: This background image arbitrarily appears and disappears when reloading. Say you reload ten times in twenty seconds; the image will appear in two instances, and be missing in the 18 others. I would have put this down to server problems, but the weird thing is that when it's missing, the request for the image doesn't appear in Firebug's net tab at all. Even if it were cached, the request should be there. Raw screenshots of the identical page on two reloads: I am 100% sure the CSS file does not change in between. I have examined both instances with Firebug and the CSS is identical. It happens in both Firefox and Chrome so it must be something basic I'm overlooking. What could be causing a browser not to load a resource at all? I have zero idea about Ruby or Rails - getting Redmine running and customized is all I have ever had to do with this platform - so I don't really know where to look. Apache's, Mongrel's and Redmine's error logs look fine, though.

    Read the article

  • Intermittent SQL Server ODBC Timeout expired

    - by Wili
    We have a bunch of VB6 applications that access two different database servers (both 32-bit Windows 2003, one SQL Server 2000, one SQL Server 2005). About every ten minutes or so, we are getting a few errors: [Microsoft][ODBC SQL Server Driver]Timeout expired [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or access denied. [Microsoft][ODBC SQL Server Driver]ConnectionRead() This is happening on more than a dozen different computers at random times. We also have IP phones that all run through the same network and those are not having any problems. We can also VNC into a user's computer and reproduce the error they were getting, but VNC still continues to work. Email also works. It just seems to be an ODBC connection to SQL Server that causes the issue. The errors happen for both of our SQL Servers. We have scoured Google, but haven't been able to come up with a solution. Is there anything we can try to diagnose the problem? Is there any fix out there?

    Read the article

  • Designing a silverlight dashboard with mef - is it possible? (with dynamic loading of xaps)

    - by Tim Robbin
    Hello! I am just trying to wrap my head around MEF. And as I am really going to love it (I guess) I started my first sample project and immediately stumbled into a big problem and now I am asking myself if I can use MEF for my scenario at all and that is the following: Imagine that one got some kind of dashboard with, let's say, five regions and above each region there are two comboboxes. The values in the first combobox represent different possible views (for example, chartControl, tableControl, pictureControl, ...) and the values of the second combobox represent the different data sources for the currently selected control. As the controls are very big in size one wants to download them as needed. If the user selects one comboboxitem the corresponding control xap should be loaded and displayed in this specific region. If the user selects another control in the same combobox the control should be removed from the visualtree and the next control should be downloaded and displayed. If the user changes the selection in a different combobox the corresponding control should be loaded again only in this specific region, with perhaps different data. And to make it a little more interesting - as this is some kind of dashboard one can change the layout from five regions to - for example - ten regions. I've seen the video "MVVM with MEF in Silverlight Video Tutorial Part 2: Plugins and Metadata" ( http://csharperimage.jeremylikness.com/2010/03/mvvm-with-mef-in-silverlight-video_09.html ) but he is using an ItemsControl and is working with Visibility and he only got ONE region. So I think that this technique is not working for me... Puh, I hope I could make myself clear! Thanks a lot for any piece of information!!! Greetings, Tim.

    Read the article

  • How to publish internal data to the internet - as simple as possible

    - by mlarsen
    We have a .net 2-tier application where a desktop program is talking to a database. We support MS SQL Server 2000, 2005, 2008 and Oracle 9, 10 and 11. The application is sold, not as shrink-wrap, but pretty close. It is quite important for us that the installation and configuration is as easy as possible, as installation instructions are usually supplied in written form to the customer's internal IT department. Our application is usually not seen as mission critical for the IT department, so we need to keep their work down to a minimum. Now we are starting to get wishes for a web application built on top of the same data. The web application will be hosted by us and delivered as a SaaS application. Now the challenge is how to move data back and forth between the web application and the customer's internal database. As I see it we have some requirements: We must be ready to handle the situation where the customer's database is not accessible from the DMZ. I guess the easiest solution is that all communication is initiated from inside the customer's LAN. As little firewall configuration as possible. The best is if we can run without any special configuration as long as outgoing traffic from the customer's LAN is not blocked. If we need something changed in the firewall, we must be able to document that the change is secure. It doesn't have to be real time. Moving data in batches every ten minutes or so is OK. Data moves both ways, but not the same tables, so we don't have to support merges. It would be nice if we don't have to roll our own framework completely. Looking forward to hearing your suggestions.
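
    A sketch of the outbound-only batch sync those requirements point to: an agent inside the customer's LAN wakes up every ten minutes, reads changed rows and pushes them over HTTPS to the hosted application, so no inbound firewall rule is needed. The endpoint URL, payload shape and read_changed_rows() body below are illustrative assumptions, not part of any existing product.

        # Outbound-initiated sync agent sketch, using the requests library.
        import time
        import requests

        SYNC_URL = "https://hosted-app.example.com/api/sync"   # hypothetical endpoint
        INTERVAL_SECONDS = 600

        def read_changed_rows():
            """Placeholder: query the local SQL Server/Oracle database for rows changed since the last push."""
            return [{"id": 1, "name": "example"}]

        def push_batch(rows):
            # Outgoing HTTPS only, so no inbound firewall rule is needed on the customer's side.
            response = requests.post(SYNC_URL, json={"rows": rows}, timeout=30)
            response.raise_for_status()

        while True:
            batch = read_changed_rows()
            if batch:
                push_batch(batch)
            time.sleep(INTERVAL_SECONDS)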

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an asp.net (mvc) web site. As part of its functionality I will have to support some long-running operations, for example: Initiated from user: a user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the db) etc... This can take from one minute to ten minutes (or even more - depends on file size). Of course I don't want to block the request when the import is running, but I want to redirect the user to some progress page where he will have a chance to watch the status, errors or even cancel the import. This operation will not be frequently used, but it may happen that two users at the same time will try to import the data. It would be nice to run the imports in parallel. At the beginning I was thinking of creating a new thread in IIS (controller action) and running the import in that new thread. But I am not sure if this is a good idea (to create worker threads on a web server). Should I use Windows services or any other approach? Initiated from system: - I will have to periodically update the Lucene index with the new data. - I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.net, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!

    Read the article

  • Flag bit computation and detection

    - by Majid
    Hi all, In some code I'm working on I should take care of ten independent parameters which can take one of two values (0 or 1). This creates 2^10 distinct conditions. Some of the conditions never occur and can be left out, but those which do occur are still A LOT and making a switch to handle all cases is insane. I want to use 10 if statements instead of a huge switch. For this I know I should use flag bits, or rather flag bytes, as the language is JavaScript and it's easier to work with a 10-character string to represent a 10-bit binary number. Now, my problem is, I don't know how to implement this. I have seen this used in APIs where multiple-selectable options are exposed with numbers 1, 2, 4, 8, ..., 2^(n-1), which are the decimal equivalents of 1, 10, 100, 1000, etc. in binary. So if we make a call like bar = foo(7), bar will be an object with whatever options the three rightmost flags enable. I can convert the decimal number into binary and in each if statement check to see if the corresponding digit is set or not. But I wonder, is there a way to determine whether the n-th binary digit of a decimal number is zero or one, without actually doing the conversion?
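
    The n-th binary digit can be tested with a shift and a mask, no string conversion required. A sketch in Python; the same expression, (flags >> n) & 1, works unchanged in JavaScript.

        # Test the n-th bit of an integer with a shift and a mask.
        def bit_is_set(flags, n):
            """True if bit n (0 = least significant) of flags is 1."""
            return ((flags >> n) & 1) == 1

        flags = 7    # binary 0000000111: the three rightmost options enabled
        for n in range(10):
            print(n, bit_is_set(flags, n))   # True for n = 0, 1, 2; False otherwise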

    Read the article

  • Execute Stored Procedure from Classic ASP

    - by Jaco Pretorius
    For some fantastic reason I find myself debugging a problem in a Classic ASP page (at least 10 years of my life lost in the last 2 days). I'm trying to execute a stored procedure which contains some OUT parameters. The problem is that one of the OUT parameters is not being populated when the stored procedure returns. I can execute the stored proc from SQL management studio (this is 2008) and all the values are being set and returned exactly as expected. declare @inVar1 varchar(255) declare @inVar2 varchar(255) declare @outVar1 varchar(255) declare @outVar2 varchar(255) SET @inVar2 = 'someValue' exec theStoredProc @inVar1 , @inVar2 , @outVar1 OUT, @outVar2 OUT print '@outVar1=' + @outVar1 print '@outVar2=' + @outVar2 Works great. Fantastic. Perfect. The exact values that I'm expecting are being returned and printed out. Right, since I'm trying to debug a Classic ASP page I copied the code into a VBScript file to try and narrow down the problem. Here is what I came up with: Set Conn = CreateObject("ADODB.Connection") Conn.Open "xxx" Set objCommandSec = CreateObject("ADODB.Command") objCommandSec.ActiveConnection = Conn objCommandSec.CommandType = 4 objCommandSec.CommandText = "theStoredProc " objCommandSec.Parameters.Refresh objCommandSec.Parameters(2) = "someValue" objCommandSec.Execute MsgBox(objCommandSec.Parameters(3)) Doesn't work. Not even a little bit. (Another ten years of my life down the drain) The third parameter is simply NULL - which is what I'm experiencing in the Classic ASP page as well. Could someone shed some light on this? Am I completely daft for thinking that the classic ASP code would be the same as the VBScript code? I think it's using the same scripting engine and syntax so I should be ok, but I'm not 100% sure. The result I'm seeing from my VBScript is the same as I'm seeing in ASP.

    Read the article

  • Python Multiword Index

    - by Manab Chetia
    index = {'Michael':  [['mj.com', 1], ['Nine.com', 9], ['i.com', 34]],
             'Jackson':  [['One.com', 4], ['mj.com', 2], ['Nine.com', 10], ['i.com', 45]],
             'Thriller': [['Seven.com', 7], ['Ten.com', 10], ['One.com', 5], ['mj.com', 3]]}
    # In this dictionary (index), each 'KEYWORD' maps to a list of
    # [link in which KEYWORD is present, position of KEYWORD in the page specified by the link] pairs.

    For example, Michael is present in mj.com, Nine.com and i.com at positions 1, 9 and 34 of the respective pages. Please help me with a Python procedure which takes index and KEYWORDS as input. When I enter 'MICHAEL', the result should be: >>['mj.com', 'Nine.com', 'i.com'] When I enter 'MICHAEL JACKSON', the result should be: >>['mj.com', 'Nine.com'] as 'Michael' and 'Jackson' are present at 'mj.com' and 'Nine.com' consecutively, i.e. in positions (1,2) and (9,10) respectively. The result should not show 'i.com' even though it contains both KEYWORDS, because they are not placed consecutively there. When I enter 'MICHAEL JACKSON THRILLER', the result should be ['mj.com'] as the three words 'MICHAEL', 'JACKSON', 'THRILLER' are placed consecutively in 'mj.com', i.e. in positions (1, 2, 3) respectively. If I enter 'THRILLER JACKSON' or 'THRILLER FEDERER', the result should be None.
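
    One way to implement the consecutive-position ("phrase") lookup, as a sketch: build a link-to-position map for each query word, keep the links that contain every word, and check that the positions step up by exactly one from word to word. It assumes each word occurs at most once per link, as in the example data, and case normalisation is left aside.

        def phrase_search(index, query):
            words = query.split()
            if any(w not in index for w in words):
                return None
            # per-word map: link -> position of that word in the page
            maps = [dict(index[w]) for w in words]
            result = []
            for link, first_pos in maps[0].items():
                if all(link in m and m[link] == first_pos + offset
                       for offset, m in enumerate(maps)):
                    result.append(link)
            return result or None

        index = {'Michael':  [['mj.com', 1], ['Nine.com', 9], ['i.com', 34]],
                 'Jackson':  [['One.com', 4], ['mj.com', 2], ['Nine.com', 10], ['i.com', 45]],
                 'Thriller': [['Seven.com', 7], ['Ten.com', 10], ['One.com', 5], ['mj.com', 3]]}

        print(phrase_search(index, 'Michael'))                    # ['mj.com', 'Nine.com', 'i.com']
        print(phrase_search(index, 'Michael Jackson'))            # ['mj.com', 'Nine.com']
        print(phrase_search(index, 'Michael Jackson Thriller'))   # ['mj.com']
        print(phrase_search(index, 'Thriller Jackson'))           # None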

    Read the article

  • Including/Organzing HTML in large javascript project

    - by Bill Zimmerman
    Hi, I've got a fairly large web app, with several mini applets on each page. These applets are almost always identical jQuery apps. I am looking for advice on how I should organize/include smaller parts of these jQuery apps within my larger project. For example, each app has several independent tabs. If possible, I would like to store each of the tabs as a separate .html file because this makes development easier. My requirements are: 1) All of the HTML 'tabs' are loaded on the client's end when the page loads. I would like to avoid any delays by dynamically requesting the tab HTML. 2) If possible, I would like to minimize the raw data sent. For example, it would be preferable to send each tab 1 time, instead of sending each tab 10 times if there are ten applets on that page. Questions: 1) What are my options for 'including' the HTML files / JavaScript code? 2) Any tips for keeping my development simple in this situation? Surely there has to be a better way than just editing one massive HTML file when working with large pages.

    Read the article
