Search Results

Search found 14236 results on 570 pages for 'times square'.

Page 130/570 | < Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >

  • Can I get the amount of time for which a key is pressed on a keyboard?

    - by Adi
    Dear all, I am working on a project in which I have to develop bio-passwords based on a user's keystroke style. Suppose a user types a password 20 times; his keystrokes are recorded, such as:

    - hold time: the time for which a particular key is held down
    - digraph time: the time between pressing two successive keys

    Suppose a user types the password "COMPUTER". I need to know the time for which every key is pressed, something like the hold times for the above password being:

        C -- 200ms  O -- 130ms  M -- 150ms  P -- 175ms
        U -- 320ms  T -- 230ms  E -- 120ms  R -- 300ms

    The rationale behind this is that every user will have a different hold time. Say an old person is typing the password: he will take more time than a student would, so it will be unique to a particular person. To do this project, I need to record the time for which each key is pressed. I would greatly appreciate it if anyone can guide me on how to get these times.

    Edit: the language is not important, but I would prefer C. I am more interested in getting the dataset.
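
    A minimal sketch of one way to capture such timings on Linux, reading raw key events from the evdev interface. The device path is an assumption (it varies per machine and reading it usually requires root), and this is only one platform-specific approach. Hold time is simply the release timestamp minus the press timestamp:

        /* Hedged sketch: Linux evdev only; /dev/input/event0 is a placeholder. */
        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <linux/input.h>

        int main(void)
        {
            struct input_event ev;
            struct timeval pressed[KEY_MAX + 1] = {{0}};
            int fd = open("/dev/input/event0", O_RDONLY);   /* adjust device node */
            if (fd < 0) { perror("open"); return 1; }

            while (read(fd, &ev, sizeof ev) == sizeof ev) {
                if (ev.type != EV_KEY) continue;
                if (ev.value == 1) {                        /* key pressed */
                    pressed[ev.code] = ev.time;
                } else if (ev.value == 0) {                 /* key released */
                    long ms = (ev.time.tv_sec  - pressed[ev.code].tv_sec)  * 1000
                            + (ev.time.tv_usec - pressed[ev.code].tv_usec) / 1000;
                    printf("key %d hold time: %ld ms\n", (int)ev.code, ms);
                }
            }
            close(fd);
            return 0;
        }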

    Read the article

  • Using JMX classes to notify on events over time

    - by Cincinnati Joe
    I've been looking at JMX for monitoring application and system metrics (partially because MBeans can be accessed by various tools such as JConsole). It would seem like the classes included with JMX would be useful for things like notification when metrics have exceeded thresholds. But I'm not sure they fit with the way I want to measure these over a specified time period.

    For example, let's say I want to notify an admin when the average CPU load is over 95% for more than 5 minutes. Is that something that can be done with a GaugeMonitor? From the docs, it doesn't seem quite suited for this, and I'm wondering if instead I should write my own MBean with the necessary logic.

    A more relevant example is when the login times for users exceed 10s over a period of 5 minutes. Slightly different would be the last 20 logins taking more than 10s on average. Another case would be a process crashing 4+ times in an hour, or the request queue exceeding 15 for 5 minutes. Are the JMX Monitor classes useful for this kind of thing?
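
    If GaugeMonitor turns out not to fit, the hand-rolled logic can stay small. A hedged sketch (the class name and polling source are assumptions, and the alert hook is a placeholder for an actual JMX Notification):

        // Poll a metric on a schedule; alert only once it has stayed above
        // the threshold for the whole window.
        public class ThresholdWatcher {
            private final double threshold;
            private final long windowMillis;
            private long breachStart = -1;   // -1 means "not currently breached"

            public ThresholdWatcher(double threshold, long windowMillis) {
                this.threshold = threshold;
                this.windowMillis = windowMillis;
            }

            /** Call from a scheduled task, e.g. every few seconds. */
            public void sample(double value, long now) {
                if (value <= threshold) {
                    breachStart = -1;        // reset as soon as the metric dips below
                    return;
                }
                if (breachStart < 0) breachStart = now;
                if (now - breachStart >= windowMillis) {
                    notifyAdmin(value);      // placeholder: emit a JMX Notification here
                    breachStart = now;       // avoid re-alerting on every sample
                }
            }

            private void notifyAdmin(double value) {
                System.err.println("ALERT: metric at " + value
                        + " for over " + windowMillis + " ms");
            }
        }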

    Read the article

  • Load PHP function with jQuery Ajax

    - by brandon14_99
    I have a file called Videos.php which is loaded at the top of my document. Inside that file are several functions, such as getYoutubeVideos. On some pages I need to call that function many times (up to 50), and of course it creates major lag in load times. So I have been trying to figure out how to call that function only when it is needed (when someone clicks the "show videos" button). I have very little experience with jQuery's AJAX abilities. I would like the AJAX call to be made inside something like this:

        jQuery('a[rel=VideoPreview1]').click(function(){
            jQuery("a[rel=VideoPreview1]").hide();
            jQuery("a[rel=HideVideoPreview1]").show();
            jQuery("#VideoPreview1").show();
            //AJAX STUFF HERE
            preventDefault();
        });

    OK, I have created this based on the responses, but it is still not working. jQuery code:

        jQuery(document).ready(function(){
            jQuery("a[rel=VideoPreview5]").click(function(){
                jQuery("a[rel=VideoPreview5]").hide();
                jQuery("a[rel=HideVideoPreview5]").show();
                jQuery.post("/Classes/Video.php",
                    {action: "getYoutubeVideos", artist: "Train", track: "Hey, Soul Sister"},
                    function(data){
                        jQuery("#VideoPreview5").html(data);
                    }, 'json');
                jQuery("#VideoPreview5").show();
                preventDefault();
            });
            jQuery("a[rel=HideVideoPreview5]").click(function(){
                jQuery("a[rel=VideoPreview5]").show();
                jQuery("a[rel=HideVideoPreview5]").hide();
                jQuery("#VideoPreview5").hide();
                preventDefault();
            });
        });

    And the PHP code:

        $Action = isset($_POST['action']);
        $Artist = isset($_POST['artist']);
        $Track = isset($_POST['track']);
        if($Action == 'getYoutubeVideos')
        {
            echo 'where are the videos';
            echo json_encode(getYoutubeVideos($Artist.' '.$Track, 1, 5, 'relevance'));
        }
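
    Two likely culprits stand out in the snippets above (hedged guesses rather than a confirmed diagnosis): preventDefault() is called as a bare function, but it lives on the event object, so each handler needs to accept e and call e.preventDefault(); and on the PHP side isset() returns a boolean, so $Action can never equal the string 'getYoutubeVideos'. A sketch of the server-side fix:

        <?php
        // Read the POSTed values rather than just testing their existence.
        $action = isset($_POST['action']) ? $_POST['action'] : '';
        $artist = isset($_POST['artist']) ? $_POST['artist'] : '';
        $track  = isset($_POST['track'])  ? $_POST['track']  : '';

        if ($action === 'getYoutubeVideos') {
            // Any stray echo before json_encode() corrupts the JSON that the
            // 'json' dataType on the jQuery side expects, so the debug line
            // from the original snippet has to go.
            echo json_encode(getYoutubeVideos($artist.' '.$track, 1, 5, 'relevance'));
        }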

    Read the article

  • Java reflection appropriateness

    - by jsn
    This may be a fairly subjective question, but maybe not. My application contains a bunch of forms that are displayed to the user at different times. Each form is a class of its own. Typically the user clicks a button, which launches a new form. I have a convenience function that builds these buttons; you call it like this:

        buildButton( "button text", new SelectionAdapter() {
            @Override
            public void widgetSelected( SelectionEvent e ) {
                showForm( new TasksForm( args... ) );
            }
        } );

    I do this dozens of times, and it's really cumbersome having to make a SelectionAdapter every time. Really, all the button needs to know is what class to instantiate when it's clicked and what arguments to give the constructor, so I built a function that I call like this instead:

        buildButton( "button text", TasksForm.class, args... );

    where args is an arbitrary list of objects that you could use to instantiate TasksForm normally. It uses reflection to get a constructor from the class, match the argument list, and build an instance when it needs to. Most of the time I don't have to pass any arguments to the constructor at all.

    The downside is obviously that if I'm passing a bad set of arguments, that can't be detected at compile time, so if it fails, a dialog is displayed at runtime. But it won't normally fail, and it'll be easy to debug if it does. I think this is much cleaner because I come from languages where the use of function and class literals is pretty common. But if you're a normal Java programmer, would seeing this freak you out, or would you appreciate not having to scan a zillion SelectionAdapters?
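
    For reference, the reflective helper described could look roughly like this (a hedged sketch; the names are illustrative rather than the poster's actual code, and primitive constructor parameters would need extra boxing logic):

        import java.lang.reflect.Constructor;

        public final class Forms {
            // Picks the first public constructor whose parameters accept the
            // given arguments; primitives (int vs Integer) are not handled here.
            public static Object instantiate(Class<?> formClass, Object... args)
                    throws Exception {
                for (Constructor<?> c : formClass.getConstructors()) {
                    Class<?>[] params = c.getParameterTypes();
                    if (params.length != args.length) continue;
                    boolean ok = true;
                    for (int i = 0; i < params.length; i++) {
                        if (args[i] != null && !params[i].isInstance(args[i])) {
                            ok = false;
                            break;
                        }
                    }
                    if (ok) return c.newInstance(args);
                }
                throw new NoSuchMethodException(
                        "No matching constructor on " + formClass.getName());
            }
        }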

    Read the article

  • I can't log in to my Django app when DEBUG is set to False

    - by Eric
    I have a very strange problem, and I don't know how to fix or debug it. Short story: I get locked out of my Django app when DEBUG is set to False. Long story:

    Case 1 (the first time it happened):
    1. I enter my login info, but it just redirects to the login page.
    2. I restart the server, try to log in, and it works fine; I get in.
    3. A few hours later I come back, log out, try to log back in, and I can't. It just redirects to the login page.

    Case 2 (I figure out how to provoke the login failure):
    1. I restart the server and am able to log in to the site.
    2. I log in and log out several times; everything is fine.
    3. I go to a non-existent page and get a server error.
    4. I log out and try to log back in, and I can't; I just get redirected back to the login page.

    Case 3 (I can't provoke the login failure with DEBUG set to True):
    1. I restart the server and am able to log in to the site.
    2. I log in and log out several times; everything is fine.
    3. I go to a non-existent page and get a traceback.
    4. I log out and log back in; everything works.
    5. I wait and play around with it and can't get the login to fail while in debug mode.

    Please help!
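
    A hedged debugging aid rather than a fix: with DEBUG = False the traceback behind that server error disappears, but a tiny old-style (pre-1.10) middleware can log it, which should reveal what breaks before the logins start failing:

        import logging
        import traceback

        logger = logging.getLogger(__name__)

        class ExceptionLoggingMiddleware(object):
            """Add to MIDDLEWARE_CLASSES to capture tracebacks in production."""
            def process_exception(self, request, exception):
                logger.error("Unhandled exception at %s\n%s",
                             request.path, traceback.format_exc())
                return None  # let Django's normal 500 handling continue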

    Read the article

  • Draw a comparison between an integer in a specific row and a count

    - by XCoderX
    This is a follow-up question to this one: Check for specific integer in a row WHERE user = $name

    I want a user to be able to comment on my site exactly five times a day. After those five times, the user has to wait 24 hours. To accomplish that I raise a counter in my MySQL database, right next to the user: where the name of the user is, that is where the counter gets raised. When it reaches 5 it should stop counting and reset after 24 hours. To check the time I use a timestamp, and I check whether the timestamp is older than 24 hours. If that is the case, the counter gets reset (-5) and the user can comment again. I use the following code, but it never stops at five; my guess is that my comparison is wrong somehow:

        $counter = "SELECT FROM table VALUES CommentCounterReset WHERE Name = '$name'";

        if(!isset($_SESSION['ts']));
        {
            $_SESSION['ts'] = time();
        }

        if ($counter >= 5)
        {
            if (time() - $_SESSION['ts'] <= 60*60*24){
                echo "You already wrote five comments.";
            }
            else {
                $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset-5 WHERE Name = '$name'";
            }
        }
        else
        {
            $sql = "UPDATE table SET CommentCounterReset = CommentCounterReset+1 WHERE Name = '$name'";
            echo "Your comment has been added.";
        }
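
    A hedged reading of why it never stops at five: $counter holds only the SQL string (which is never executed, and isn't valid SQL anyway), so $counter >= 5 compares text against a number; there is also a stray semicolon after if(!isset($_SESSION['ts'])); that makes the following block run unconditionally. A sketch of fetching the real count (mysqli usage and the $mysqli connection handle are assumptions; table and column names follow the question):

        <?php
        $stmt = $mysqli->prepare("SELECT CommentCounterReset FROM `table` WHERE Name = ?");
        $stmt->bind_param('s', $name);
        $stmt->execute();
        $stmt->bind_result($counter);   // $counter now holds the actual integer
        $stmt->fetch();
        $stmt->close();

        if ($counter >= 5) {
            // ... the existing 24-hour check goes here
        }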

    Read the article

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked this question before, but in a different manner. I am taking form data, compiling it into a temporary CSV file, and trying to send it over to a client via FTP over SSL (this is the only route I am interested in hearing solutions for unless there is a workaround; I cannot make changes). I have tried the following:

    - ftp_connect - nothing happens, the page just times out
    - ftp_ssl_connect - nothing happens, the page just times out
    - curl library - same thing; given the URL it also gives an error

    I am given the following information: FTPS server IP address, TCP port (1234), username, password, data directory to dump the file into, and FTP mode: passive.

    Very, very basic code (which I believe should initiate a connection at minimum):

        <?php
        $ftp_server = "00.000.00.000"; //masked for security
        $ftp_port = "1234"; // masked but not 990
        $ftp_user_name = "username";
        $ftp_user_pass = "password";

        // set up basic ssl connection
        $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, "20");

        // login with username and password
        $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);

        echo ftp_pwd($conn_id); // /
        echo "hello";

        // close the ssl connection
        ftp_close($conn_id);
        ?>

    When I run this over a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
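
    One hedged observation: the client spec says passive mode, but the code never calls ftp_pasv(), and an active-mode data channel behind NAT or a firewall stalls in exactly this "just hangs" way. A sketch of the passive-mode variant (credentials and the directory are placeholders):

        <?php
        $conn_id = ftp_ssl_connect($ftp_server, (int)$ftp_port, 20);
        if ($conn_id === false) die("connect failed");
        if (!ftp_login($conn_id, $ftp_user_name, $ftp_user_pass)) die("login failed");

        ftp_pasv($conn_id, true);                 // must come after login
        ftp_chdir($conn_id, '/data');             // placeholder directory
        ftp_put($conn_id, 'upload.csv', $localCsvPath, FTP_ASCII);
        ftp_close($conn_id);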

    Read the article

  • How to customize printing while using window.print?

    - by Holicreature
    I want to print an invoice, and I use a print.css with media=print. When I switch the current stylesheet to print.css I am able to preview what should be printed, without the titles, and with the content left-aligned. But when I print, there is still space at the top and left, and it takes up the whole A4 sheet and the whole width of the page, even though I've defined a body width of just 550px. When I look at the print preview, it takes the whole width instead of about a third of it. My print.css is:

        body {
            width: 550px;
            height: 450px;
            color: #000000;
            margin: 0;
            padding: 0;
            word-spacing: 1.1pt;
            font-family: "Times New Roman", Times, serif;
            font-size: 10px;
            text-align: left;
        }
        a {
            visibility: hidden;
            display: none;
        }
        input {
            display: none;
        }
        table {
            margin: 1px;
            text-align: left;
        }
        #list, #head, #cont, #fotter, #oth, #links, #name, li, ul, ol {
            display: none;
        }

    I'm printing through the browser using window.print, so is there any special configuration I need to do?
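
    A hedged pointer: the blank space around the content comes from the browser's page box, not from body { width }, so it is controlled by the @page rule (browser support for @page varied in this era) and by the printer margins in the browser's page-setup dialog, which CSS cannot override. A sketch:

        /* Shrink the page-box margins where @page is honored. */
        @media print {
            @page { margin: 0.5cm; }
            body  { width: 550px; margin: 0; }
        }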

    Read the article

  • How can I make a simple path generator?

    - by user360737
    Hi all, I have a link that I have to repeat 50 times for each folder, and I have 15 folders. The link that I have to repeat looks like this:

        <a href="update/images/Cars/car (x).JPG" rel="lightbox[cars]"></a>

    The JPG files are named car 1 through car 50, and I would really like to be able to generate this markup: I input the path "update/images/Cars/", the picture title (car), and the number of times I need the link, and it spits out something that looks like this:

        <a href="update/images/Cars/car (1).JPG" rel="lightbox[cars]"></a>
        <a href="update/images/Cars/car (2).JPG" rel="lightbox[cars]"></a>
        <a href="update/images/Cars/car (3).JPG" rel="lightbox[cars]"></a>
        <a href="update/images/Cars/car (4).JPG" rel="lightbox[cars]"></a>
        <a href="update/images/Cars/car (5).JPG" rel="lightbox[cars]"></a>
        <a href="update/images/Cars/car (6).JPG" rel="lightbox[cars]"></a>

    and so on. I'm assuming this can be done with a counter, but I'm not sure. Thanks!
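
    It can indeed be a simple counter loop. A minimal sketch in plain JavaScript (the function name and the document.write usage are just for illustration):

        function makeLinks(path, title, group, count) {
            var out = '';
            for (var i = 1; i <= count; i++) {
                out += '<a href="' + path + title + ' (' + i + ').JPG" rel="lightbox[' + group + ']"></a>\n';
            }
            return out;
        }

        // Example: 50 car links for the lightbox group "cars"
        document.write(makeLinks('update/images/Cars/', 'car', 'cars', 50));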

    Read the article

  • Set service dependencies after install

    - by Dennis
    I have an application that runs as a Windows service. It stores various settings in a database that are looked up when the service starts. I built the service to support various types of databases (SQL Server, Oracle, MySQL, etc.). Often, end users choose to configure the software to use SQL Server (they can simply modify a config file with the connection string and restart the service). The problem is that when their machine boots up, SQL Server is often started after my service, so my service errors out on startup because it can't connect to the database.

    I know that I can specify dependencies for my service to guide the Windows service manager to start the appropriate services before mine. However, I don't know what services to depend upon at install time (when my service is registered), since the user can change databases later on. So my question is: is there a way for the user to manually indicate the service dependencies based on the database that they are using? If not, what is the proper design approach that I should be taking?

    I've thought about waiting 30 seconds after my service starts up before connecting to the database, but this seems really flaky for various reasons. I've also considered trying to "lazily" connect to the database; the problem is that I need a connection immediately upon startup, since the database contains various pieces of vital info that my service needs when it first starts. Any ideas?
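
    Dependencies are not fixed at install time; they can be edited afterwards with the built-in sc.exe tool. A hedged sketch that an installer, a config utility, or the user could run (the service names are placeholders, and note the mandatory space after "depend="):

        rem Make MyService wait for the default SQL Server instance:
        sc config MyService depend= MSSQLSERVER

        rem Multiple dependencies are separated by forward slashes:
        sc config MyService depend= MSSQLSERVER/Tcpip

    The same list lives in the registry under the service's DependOnService value, which a configuration tool could rewrite whenever the user switches databases.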

    Read the article

  • How to do an additional search on an archive in Rails if a record is not found, by extending the model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches begin to take time. I want to speed things up by archiving data that is more than 3 years old: saving it into a table called orders-archive and then purging it from the orders table. If we need to research something, or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through older data all the time. These move-and-purge operations can then be run from cron on a weekly basis. I have already done some tests, and I know that I will cut my search times to about a quarter of what they are now. So far so good, right?

    However, I was thinking about how to implement the archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. But I have about 20 tables that I want to archive, and goodness knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
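
    One hedged, Rails 2.x-era sketch of the "extend the model" idea: wrap find with a fallback via alias_method_chain. The model and archive class names are assumptions, and this only covers find-by-id lookups, not dynamic finders or scoped conditions:

        class Order < ActiveRecord::Base
          class << self
            def find_with_archive(*args)
              find_without_archive(*args)
            rescue ActiveRecord::RecordNotFound
              OrderArchive.find(*args)   # same schema, table "orders-archive"
            end
            alias_method_chain :find, :archive
          end
        end

    Since about 20 tables need this, the class << self block could be extracted into a module and included into each archived model.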

    Read the article

  • SDK Platform Tools component missing - Similar to Android Eve below

    - by Hertfordkc
    Ubuntu Linux 10.04 / Eclipse 3.5.2. I'm new to Eclipse and Android. Eclipse is up and running simple Java apps OK. I moved on to downloading the Android SDK starter package, which seemed to go OK. I ran the SDK manager and downloaded platforms 7, 8 & 9, then installed the ADT package in Eclipse. I've tried to load the SDK path into the Eclipse preferences, but it won't retain the path. After a restart, Eclipse says it can't find the SDK package. Also, one message said that the (revision?) number of the ADT couldn't be found. I've reinstalled Eclipse a couple of times, then gone through the SDK & ADT download procedures a couple of times, and am stuck. Any suggestions will be appreciated. Hertfordkc

    Stupid question, caused by not thoroughly reading the Android Developers Guide and the tutorials before trying to start a project. I don't know why I didn't get a message about a missing XML file.

    Read the article

  • Simplifying for-if messes with better structure?

    - by HH
    # Description: you are given a bitwise pattern and a string.
    # You need to find the number of times the pattern matches in the string.
    # Any one-liner or simple pythonic solution?

        import random

        def matchIt(yourString, yourPattern):
            """Find the number of times yourPattern occurs in yourString."""
            count = 0
            matchTimes = 0
            # How can you simplify the for-if structures?
            for coin in yourString:
                # return to base
                if count == len(pattern):
                    matchTimes = matchTimes + 1
                    count = 0
                # special case to return to 2; there could be more conditions of
                # this type, so these if-conditionals are screaming for havoc
                if count == 2 and pattern[count] == 1:
                    count = count - 1
                # the workhorse; it could be simpler by breaking the initial
                # string of length l into blocks of pattern length, of which
                # there are l - len(pattern) - 1
                if coin == pattern[count]:
                    count = count + 1
            average = len(yourString)/matchTimes
            return [average, matchTimes]

        # Generate the list
        myString = []
        for x in range(10000):
            myString = myString + [int(random.random()*2)]

        pattern = [1,0,0]
        result = matchIt(myString, pattern)
        print("The sample had "+str(result[1])+" matches and its size was "+str(len(myString))+".\n"
              + "So it took "+str(result[0])+" steps in average.\n"
              + "RESULT: "+str([a for a in "FAILURE" if result[0] != 8]))

        # Sample output:
        #
        # The sample had 1656 matches and its size was 10000.
        # So it took 6 steps in average.
        # RESULT: ['F', 'A', 'I', 'L', 'U', 'R', 'E']
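
    For the "simple pythonic solution" the question asks about, a hedged sketch that slides a window over the list and counts matches. Note it counts overlapping occurrences, which may differ from the original's counting:

        def match_count(seq, pattern):
            """Count occurrences of pattern in seq, overlaps included."""
            n = len(pattern)
            return sum(1 for i in range(len(seq) - n + 1) if seq[i:i + n] == pattern)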

    Read the article

  • heroku logs --ps run showing nothing

    - by Zarne Dravitzki
    I have two running apps on Heroku, staging and production. They are near-identical environments (staging has extra configs, i.e. RailsFootnotes and the Bullet gem). When I run

        heroku logs --ps run --app jl-staging

    it returns logs like

        2012-08-30T01:30:42+00:00 heroku[run.1]: Starting process with command `bundle exec rake jewellover:warn_users`

    This log is from a task set to run with the free Heroku Scheduler. Everything works perfectly, but when I do the same with

        heroku logs --ps run --app jl-production

    there are no results: no heroku[run.1] process logs. Both environments have the same scheduled tasks, albeit at different times, but nonetheless both run scheduled tasks at the specified times. Is there something I'm missing about heroku[run.1] processes in the production env? Does Heroku only keep the --ps logs for a certain amount of time? It seems to show less activity than the normal logs, maybe only 24 hours' worth rather than the last 100 lines. I need to log and debug the [run.1] process from the production env, specifically the jewellover:warn_users task. Any ideas?

    Read the article

  • Problems with Threading in Python 2.5, KeyError: 51, Help debugging?

    - by vignesh-k
    I have a Python script which runs a particular script a large number of times (for Monte Carlo purposes). The way I have scripted it is that I queue up the script the desired number of times, then I spawn threads, and each thread runs the script once and again when it's done. Once the script in a particular thread is finished, the output is written to a file by acquiring a lock (so my guess was that only one thread accesses the lock at a given time). Once the lock is released by one thread, the next thread accesses it, adds its output to the previously written file, and rewrites it.

    I am not facing a problem when the number of iterations is small, like 10 or 20, but when it's large, like 50 or 150, Python returns a KeyError: 51, telling me an element doesn't exist, and the error it points to is within the lock, which puzzles me, since only one thread should access the lock at once and I do not expect an error. This is the class I use:

        class errorclass(threading.Thread):

            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    lock = threading.RLock()
                    lock.acquire()
                    # ADD entries from current thread to entries in file
                    # and REWRITE FILE
                    lock.release()

        queue = Queue.Queue()
        for i in range(threads):
            errorclass(queue).start()

        for i in range(desired_iterations):
            queue.put(i)
        for i in range(threads):
            queue.put(None)

    Python returns KeyError: 51 for a large number of desired iterations during the add/write-file operation after the lock is acquired. I am wondering if this is the correct way to use the lock, since every thread has its own lock operation rather than every thread accessing a shared lock? What would be the way to rectify this?
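
    The suspicion at the end is right: lock = threading.RLock() inside run() creates a brand-new lock object on every iteration, so no two threads ever contend on the same lock and the file is effectively unprotected. A hedged sketch of the shared-lock fix (myfunction and the file-merging step are from the original script):

        import threading

        file_lock = threading.Lock()          # ONE lock object, shared by all threads

        class errorclass(threading.Thread):
            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    with file_lock:           # every thread uses the same object
                        pass                  # merge entries and rewrite file here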

    Read the article

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times more than normal. For example, we are going to be featured on a television show, and I expect that in the hour after the show I'll get more than 100 times more traffic than normal. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

    - RAM buffers
    - the commit log
    - the binary log
    - the actual tables
    - all of the above places on my DB slave

    This is too much "durability" given that I'm on an EC2 node and most of the stuff goes across the same network pipe (file systems are network-attached). Plus the drives are just slow. The data is not high-value, and I'd rather take a small chance of a few minutes of data loss than a high probability of an outage when the crowd arrives.

    During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or the binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write back" cache algorithm rather than a "write through" cache algorithm. Can I convince it to do that?

    If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so it's a write-heavy workload.
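
    The standard knobs for trading durability for write throughput are sketched below; exact behavior varies by MySQL version, so the docs for your release should be checked before a traffic spike. With these settings a crash loses at most roughly a second of commits:

        -- Flush the redo log to the OS at each commit, fsync only ~once a second:
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

        -- Let the OS decide when to flush the binary log to disk:
        SET GLOBAL sync_binlog = 0;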

    Read the article

  • How does SVN store commit time

    - by Salman
    I am working on a project that involves extracting details from an SVN server using SVNKit. My project is already complete and has been working well for a while now. During testing, I noticed something very strange: the commit times in my extracted data always seem different from what's in the SVN logs. I couldn't find any code in my project that could be inducing this difference, so now I am looking at how the SVN server stores the commit time itself.

    As we have developers working in different parts of the world, and thus in different timezones, I was thinking that SVN might store times after converting them to GMT, or to the timezone of the system on which the SVN server is running. But that does not seem to be happening. Instead the times appear to be stored as the local time at which the commit was done, in that local timezone. I have been unable to find any substantial document on the internet to support my theory so far. Can anybody briefly explain how SVN stores the commit time for each change? Documentation links referring to this would be of great help.
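
    For what it's worth, Subversion records each revision's timestamp in the svn:date revision property in UTC (ISO 8601, with a trailing Z); clients such as svn log normally convert it to the local zone for display, which is a common source of exactly this kind of mismatch. The raw stored value can be checked directly (URL and revision number are placeholders):

        svn propget --revprop svn:date -r 123 http://svn.example.com/repo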

    Read the article

  • How to get to the key name of a referenced entity property from an entity instance without a datastore read in Google App Engine?

    - by Sumeet Pareek
    Consider that I have the following models:

        class Team(db.Model):  # say I have just 5 teams
            name = db.StringProperty()

        class Player(db.Model):  # say I have thousands of players
            name = db.StringProperty()
            team = db.ReferenceProperty(Team, collection_name="player_set")

    The key name for each Team entity is 'team_<n>' and for each Player entity 'player_<n>'. By some prior arrangement I have each Team entity's (key_name, name) mapping available to me, for example (team_01, United States Of America), (team_02, Russia), etc. I have to show all the players and their teams on a page. One way of doing this would be:

        players = Player.all().fetch(1000)  # This is 1 DB read
        for player in players:              # This will iterate 1000 times
            self.response.out.write(player.name)       # Not a DB read
            self.response.out.write(player.team.name)  # 1 x 1000 = 1000 DB reads

    That is 1001 DB reads for a silly thing. The interesting part is that when I do db.to_dict() on players, it shows that for every player in that list the 'name' of the player is there, and the 'key_name' of the team is available too. So how can I do the below?

        players = Player.all().fetch(1000)  # This is 1 DB read
        for player in players:              # This will iterate 1000 times
            self.response.out.write(player.name)  # Not a DB read
            # 'team_list' already has (key_name, name) for all 5 teams
            self.response.out.write(team_list[player.<SOME WAY OF GETTING TEAM KEY NAME>])

    I have been struggling with this for a long time and have read every available document. I could just hug the person that can help me here :-)

    Disclaimer: The above problem description is not a real scenario but a simplified arrangement that represents my problem exactly. I have run into it in a rather complex and big GAE application.
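
    A hedged sketch using db.ReferenceProperty's get_value_for_datastore idiom, which pulls the referenced entity's db.Key out of the already-fetched Player without dereferencing it, so there is no extra datastore read:

        players = Player.all().fetch(1000)                           # 1 DB read
        for player in players:
            team_key = Player.team.get_value_for_datastore(player)  # a db.Key, no fetch
            self.response.out.write(player.name)
            self.response.out.write(team_list[team_key.name()])     # e.g. 'team_01'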

    Read the article

  • Very simple Python function spends a long time in the function itself, not in subfunctions

    - by John Salvatier
    I have spent many hours trying to figure out what is going on here. The function grad_logp in the code below is called many times in my program, and cProfile and RunSnakeRun (used to visualize the results) reveal that grad_logp spends about .00004s 'locally' every call, not in any functions it calls, and the function n spends about .00006s locally every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem like this is function-call overhead, as other Python functions spend far less time 'locally', and merging grad_logp and n does not make my program faster, yet the operations these two functions do seem rather trivial. Does anyone have any suggestions on what might be happening? Have I done something obviously inefficient? Am I misunderstanding how cProfile works?

        def grad_logp(self, variable, calculation_set):
            p = params(self.p, self.parents)
            return self.n(variable, self.p)

        def n(self, variable, p):
            gradient = self.gg(variable, p)
            return np.reshape(gradient, np.shape(variable.value))

        def gg(self, variable, p):
            if variable is self:
                gradient = self._grad_logps['x'](x=self.value, **p)
            else:
                gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p)
                                            for parameter, value in self.parents.iteritems()])
            return gradient

    Read the article

  • Survey statistic diagram ideas

    - by Nort
    Hey everyone, I've got some homework tasks on the topic of surveys and diagrams. The first task is to normalize the input of a survey, because the structure of the data changes from time to time. There are three types of survey fields:

    - static fields, where text is stored
    - dynamic ones, where the user can select one option
    - multi-select fields, where the user can select multiple options

    I'm not really a statistics guy, so I have no real idea what I can do with the incoming data. The data I have is stored in an XML file, from which I can easily get how many times a survey was filled in and how many times a field was filled in, so I can (e.g. on a pie chart) show the relation of filled to not filled. The second idea is to show the relation between the contents of a multi-option element using a bar chart or so. For the multi-option elements I've also had the idea of showing data implied by one option. But the question is: what could be shown? The other problem is the static elements (text fields and so on): what data could be represented from a single field? The data in the XML file was collected from 2001 to 2005, so maybe I can also work with the dates of the surveys. But as I said, I don't really know how to process the data so as to get as much out of it as possible.

    Read the article

  • How to get Amazon s3 PHP SDK working?

    - by JakeRow123
    I'm setting up S3 for the first time and trying to run the sample file that comes with the PHP SDK, which creates a bucket and attempts to upload some demo files to it. But this is the error I am getting:

        The difference between the request time and the current time is too large.

    I read in another question on SO that this is because Amazon determines a valid request by comparing the times between the server and the client: the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php and placed this code in it:

        <?php print strftime('%c'); ?>

    and the output is:

        Fri Jun 8 00:31:22 2012

    It looks like the day is correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window? What if I have a user in China who wishes to upload a file on my site, which uses S3 for the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?
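
    A hedged clarification of the terminology: the "client" in the 15-minute rule is the machine running the SDK, i.e. the web server signing the requests, never the visitor's browser, so end users in other timezones don't matter at all. What matters is keeping the web server's clock accurate, e.g. with NTP (the command and package vary by distro), and note that the comparison is against UTC, so a server whose wall-clock looks right but whose timezone setting is wrong can still fail the check:

        # One-off clock sync on a typical Linux host (placeholder NTP pool):
        sudo ntpdate pool.ntp.org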

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently redid an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it.

    I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that; then again, permissions should also have prevented the leak in the first place.

    It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the two employees can run the few times a year they need the restricted data, so the data is decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?

    Read the article

  • Error: '$viewMap[...]' is null or not an object

    - by DK
    My jQuery/JavaScript knowledge is limited, I'm afraid. I have a "how did you hear about us" dropdown on a form. However, I get the following JavaScript error on change:

        Error: '$viewMap[...]' is null or not an object

    My dropdown looks like this:

        <select onchange="setSourceID(this.value)" name="sourceID" id="sourceID" class="required">
            <option value="" selected="selected">Please choose&#8230;</option>
            <option value="National Paper">National Paper</option>
            <option value="Magazine">Magazine</option>
            <option value="Regional Paper">Regional Paper</option>
            <option value="9682">Internet Search</option>
            <option value="9684">Recommendation</option>
            <option value="9683">Other</option>
        </select>

        <!-- some additional dropdowns below that appear based on what's selected above -->
        <select onchange="setSourceID(this.value)" name="referrerName[]" id="referrer1" class="smartField">
            <option value="" selected="selected">Please choose&#8230;</option>
            <option value="The Times">The Times</option>
            etc...
        </select>

    and so on. My JavaScript looks like this:

        $(document).ready(function() {
            $('.smartField').hide();
            $.viewMap = {
                '' : $([]),
                'National Paper' : $('#referrer1'),
                'Magazine' : $('#referrer2'),
                'Regional Paper' : $('#referrer3')
                //'Internet Search' : $('#referrer4'),
                //'Recommendation' : $('#referrer5'),
                //'Other' : $('#referrer6')
            };
            $("#sourceID").bind(($.browser.msie ? "click" : "change"), function () {
                $.each($.viewMap, function() { this.hide(); }); // hide all
                $.viewMap[$(this).val()].show(); // show current
            });
        });

    Does anybody have any idea where I'm going wrong? Any help is very much appreciated.
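
    A hedged reading of the error: values such as "9682", "9683" and "9684" (and the commented-out entries) have no key in $.viewMap, so $.viewMap[$(this).val()] is undefined and calling .show() on it throws. One sketch of a guard:

        $("#sourceID").bind(($.browser.msie ? "click" : "change"), function () {
            $.each($.viewMap, function () { this.hide(); });  // hide all
            var mapped = $.viewMap[$(this).val()];
            if (mapped) { mapped.show(); }                    // show only if mapped
        });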

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
    SELECT * FROM `employees` a
    LEFT JOIN (SELECT phone1 p1, COUNT(*) c FROM `employees` GROUP BY phone1) b
        ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally, with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get it to complete in about 4 minutes on a 10,000-row table, but performance drops exponentially with larger tables. Remotely it loses the connection to the server, and locally it brings my system to its knees and seems to go on forever.

    This query is only a small step in a larger query that couldn't complete, so maybe I should explain the whole scenario. I have one big, flat, ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and its number of occurrences equals the number of times the street address it is attached to occurs, then it must be an office number.

    So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in each group, so I figured I have to join the full table to the count table where the phone numbers match. This does work, but as I said, I can't complete it on any table much larger than 10,000 rows. This seems pathetic; it doesn't seem like a crazy query. Is there a better way to achieve what I want, do I have to break my large table into 12 pieces, or is there something wrong with the table or the DB?
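
    One hedged explanation: older MySQL materializes the derived table without any index, so each of the 120,000 outer rows can trigger a full scan of the 120,000-row count table. Materializing the counts yourself, with a key on the join column, sidesteps that:

        CREATE TEMPORARY TABLE phone_counts
            (KEY (phone1))                 -- index the join column
        AS SELECT phone1, COUNT(*) AS c
           FROM employees
           GROUP BY phone1;

        SELECT a.*, b.c
        FROM employees a
        LEFT JOIN phone_counts b ON a.phone1 = b.phone1;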

    Read the article

  • Understanding REST: is GET fundamentally incompatible with any "number of views" counter?

    - by cocotwo
    I'm trying to understand REST. Under REST, a GET must not trigger something transactional on the server (this is a definition everybody agrees upon; it is fundamental to REST). So imagine you've got a website like stackoverflow.com (I say "like" so that if I get the underlying details of SO wrong, it doesn't change my question), where every time someone reads a question using a GET, there's also a display showing "This question has been read 256 times". Now someone else reads that question. The counter is now at 257. The GET is transactional because the number of views got incremented, and is now incremented again. The "number of views" is incremented in the DB; there's no arguing about that (for example, on SO the number of times any question has been viewed is always displayed).

    So, is a REST GET fundamentally incompatible with any kind of "number of views" functionality on a website? Should an SO main page that wants to be "RESTful" either stop displaying plain HTML links that are accessed using GETs, or stop displaying "this question has been viewed x times"? Because incrementing a counter in a DB is transactional and hence "un-RESTful"?

    EDIT, just so that people Googling this can get some pointers. From http://www.xfront.com/REST-Web-Services.html:

        4. All resources accessible via HTTP GET should be side-effect free. That is,
        the request should just return a representation of the resource. Invoking the
        resource should not result in modifying the resource.

    Now, to me, if the representation contains the "number of views", it is part of the resource (and on SO the number of views a question has is very important information), and accessing it definitely modifies the resource. This is in sharp contrast with, say, a truly RESTful HTTP GET like the one you can make on an Amazon S3 resource, where your GET is guaranteed not to modify the resource you get back. But then I'm still very confused.

    Read the article
